OK, so what we have done up to now is discover that our problem in the half space in dimension d+1 is equivalent to a problem in dimension d. Let me erase. So now we are in dimension d, and understanding minimizers of our original problem is equivalent to understanding minimizers of the following energy: E_1(v) = (1/2) [v]_{H^{1/2}}^2 + ∫ F(v). One can compute the Euler-Lagrange equation: you take d/dε at ε = 0 of E_1(u + εw), you do the derivative, and you get ∫ ((-Δ)^{1/2} u + f(u)) w = 0, where (-Δ)^{1/2} u(x) is the integral of (u(x) - u(y)) / |x - y|^{d+1} dy. You have seen many of these: you differentiate under the integral and integrate by parts in some sense, or you just use the symmetry of the kernel. Here f is the derivative of the potential F: when you do u + εw and differentiate, you get f(u) times w. So this is the Euler-Lagrange equation; it is an equation as before, the only difference is that instead of the Laplacian we now have the half Laplacian. And now let's try to make the same analysis as we did before. Assume that F(v) = (1 - v^2)^2 / 4, like in Allen-Cahn. If u is a minimizer of E_1, then u_ε(x), defined as u(x/ε), is a minimizer of a new energy (I have to check the numerology): E_ε(v) = (1/2) [v]_{H^{1/2}}^2 + 1/(4ε) ∫ (1 - v^2)^2. But you see, if I am a minimizer of this energy, I am a minimizer of every multiple of this energy, so I can also put a factor 1/|log ε| in front, and I am still a minimizer.
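The numerology just mentioned can be written out explicitly. This is my own check of the scaling, under the convention that the H^{1/2} seminorm is the double integral with kernel |x-y|^{-(d+1)}:

```latex
% Let u_\varepsilon(x) = u(x/\varepsilon). Substituting x = \varepsilon x', y = \varepsilon y':
[u_\varepsilon]_{H^{1/2}}^2
  = \int\!\!\int \frac{|u(x/\varepsilon) - u(y/\varepsilon)|^2}{|x-y|^{d+1}}\,dx\,dy
  = \varepsilon^{2d-(d+1)}\,[u]_{H^{1/2}}^2
  = \varepsilon^{d-1}\,[u]_{H^{1/2}}^2,
\qquad
\int F(u_\varepsilon)\,dx = \varepsilon^{d}\int F(u)\,dx .
% Hence, with F(v) = (1-v^2)^2/4,
E_\varepsilon(u_\varepsilon)
  = \tfrac12\,[u_\varepsilon]_{H^{1/2}}^2 + \frac{1}{4\varepsilon}\int (1-u_\varepsilon^2)^2
  = \varepsilon^{d-1}\Bigl(\tfrac12\,[u]_{H^{1/2}}^2 + \int F(u)\Bigr)
  = \varepsilon^{d-1}\,E_1(u),
% so minimizers of E_1 rescale to minimizers of E_\varepsilon (and of any multiple of it).
```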
And why do I do this? Because now my u_ε minimizes this normalized energy, and then there is the following theorem: as ε tends to 0, E_ε Gamma-converges again to the perimeter. Similarly, u_ε converges in L^1_loc to χ_E - χ_{R^d \ E}, with E a local minimizer of the perimeter, so the classical perimeter, nothing fractional: we are back to the same result as before, the limit is a minimizer of the perimeter. OK, so once you see this, the same De Giorgi conjecture should hold: because of this analogy, everything I told you in the last lecture you can just repeat, replacing the Laplacian with the half Laplacian, because anyhow everything was motivated by this Gamma-convergence result, and you still converge to the perimeter, so why not? So you can expect the same results as before to hold, and in fact there have been some positive answers. So now I can erase the De Giorgi statement, we have seen it. Alessio: actually you have a whole bunch of fractional Laplacians (-Δ)^s, depending on how you write the exponent d + 2s. I am interested in one half. Alessio: no, I understand, I respect that, but I was just noticing that in principle you can make this remark, with a different scaling over there, for all these Laplacians. Yes, but if s is between one half and one you get the perimeter, while if s is less than one half you get the fractional perimeter, where you do not have the same picture as for classical minimal surfaces. So a priori you could state this statement for every fractional Laplacian with s between one half and one, one half included. That is what I was saying. In particular, one half is the borderline case: as soon as s becomes less than one half, you do not get the perimeter anymore, you get the fractional perimeter. OK, so let's state the results.
So for d = 2 there is a paper by Cabré and Solà-Morales in '05; then for d = 3, Cabré and Cinti in 2010; and then if you go to dimensions between 4 and 8 there is a paper by Savin, where in fact what he does is the result for the fractional Laplacian for every s strictly between one half and one, so except one half, and he announced that he is doing the case s = 1/2. In that paper he does s strictly between one half and one because it is easier, but he announced that the method also handles one half. So this is Savin, again under the assumption that u tends to plus and minus one as x_d tends to plus and minus infinity. So again, the same assumption as for the classical Laplacian: you need to assume that at infinity u is ±1, so that in particular u is a local minimizer. OK, so this is just the generalization of Savin's result also to this Laplacian. There are also versions of this result in the fractional case, with s between one half and one; the only one which has not been written but announced is the one half case, because it is the borderline and a bit more annoying. OK, let me recall the principle: if stable solutions are 1D in R^{d-1}, then monotone solutions are 1D in R^d. This is the discussion I made before: if you can prove something for stable solutions in some dimension, you get it for monotone solutions in one dimension more. So here is the statement of the result recently proved with Joaquim Serra. Let u from R^3 to R be a stable solution of (-Δ)^{1/2} u = f(u), with |u| ≤ 1, so u is bounded, for some f which is, say, C^{0,1/2}. Then u is 1D. So we proved that in dimension 3 stable solutions are always one-dimensional. And in fact it does not even matter what you put as the right-hand side; it does not matter that this is Allen-Cahn, you get the result. And this is something that was already observed, not for stable solutions but for the De Giorgi conjecture, in the classical case by Alberti, Ambrosio, and Cabré in 2001: these kinds of rigidity results very often hold for every f. So even though the conjecture is motivated by Allen-Cahn, and you would expect it to be really crucial that the potential is (1 - u^2)^2 / 4, that is, something that Gamma-converges to the perimeter, in the end no matter what f you put, these results seem to be true. So at least the result for the classical Laplacian in dimension 3 holds for every f, and now we get it for every f also in this case. So, remark: this is the analog of the result for minimal surfaces in dimension 3, the result I stated by Schoen, saying that stable minimal surfaces in dimension 3 are planes. And remark: this is open for the classical Laplacian. If here you put the Laplacian, the result is open. Because we are doing stable solutions in dimension 3, we get as a corollary: De Giorgi is true for d = 4. Stable in dimension 3 gives monotone in dimension 4, which for the classical Laplacian is open: Savin can do it under the extra assumption at infinity, but there is no De Giorgi yet. So in the remaining 45 minutes I will try to give you as many elements of the proof as I can. Now we start to do something a bit more interesting on the one hand, and maybe less colloquial and user-friendly, but I will try to emphasize the main ideas. So we want to prove this theorem, and there are 3 steps. Let me state the steps, then I will tell you the technical tools and we will try to get through the proof. Strategy of proof. Step 1 consists in proving the following: if (-Δ)^{1/2} u = f(u) in R^d (the sign convention does not matter), in any dimension, and u is stable, then for every R the integral over B_R of |∇u| is bounded by C R^{d-1} log R, and the double integral over B_R × B_R of |u(x) - u(y)|^2 / |x - y|^{d+1} dx dy is bounded by C R^{d-1} log^2 R.
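Written out, the two estimates of step 1 read (my transcription of the formulas on the board):

```latex
\int_{B_R} |\nabla u|\,dx \;\le\; C\,R^{\,d-1}\log R,
\qquad
\int_{B_R}\!\int_{B_R} \frac{|u(x)-u(y)|^2}{|x-y|^{\,d+1}}\,dx\,dy \;\le\; C\,R^{\,d-1}\log^2 R .
```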
So this is the integral of the gradient of u in a ball of radius R, and this is, more or less, the H^{1/2} norm in a ball of radius R; it is not exactly that formula, in fact, but that is what we get. And this is true in every dimension. Then step 2: if d = 3 we can remove a log R from these estimates. So here you get the same estimate without the log, and here you get the estimate with only one log instead of log squared. And actually these are optimal: if you test these estimates on the 1D solution, you get exactly that growth, so that is the sharp estimate, without the log. And then, step 3: once you have the optimal growth, we prove that u is one-dimensional. So these are the three key points, and let me start with the proof of step 1. There are several things I need to use, but let me first make some comments. One of the things we use in this step is the following lemma. On the one hand you have a very natural object appearing in all these quantities, which is the H^{1/2} norm; it is there because it is the energy. On the other hand, and this is something that was already discovered previously in the setting of nonlocal minimal surfaces, when you try to use stability you can make the classical L^1 norm of the gradient of the solution appear. So a quantity like the integral of |∇u| appears from stability, and the H^{1/2} norm is there from the energy, and now you want to play with these quantities. The key point is that they are not the same, but they are very much comparable. How are they comparable? It is crucial that there is such a result. So we prove the following lemma; I am not claiming originality, probably people knew it, but it is the kind of interpolation inequality that is usually difficult to find in the literature.
You have a function bounded by 1 and you assume its gradient is bounded in L^∞ of B_2, say. So let v be a function which is bounded by 1 and is Lipschitz in B_2 with Lipschitz constant L_0, with L_0 ≥ 2, a number which is not too small, just because I want to take its log. Then the norm of v in H^{1/2}(B_1), squared, is bounded by C log L_0 times (1 + the integral over B_2 of |∇v|). So this inequality is telling you that the norm of the gradient alone cannot control the H^{1/2} norm, but it can do it if you pay a logarithm of the Lipschitz constant. And the key points that are going to save us are these. First, we get a log here, a log to the power 1: if you put log squared and repeat our proof, it does not work. Second, on the left we have a power 2: this quantity is a square, a double integral, quadratic in v, while the right-hand side is linear in v. That is also something that is going to help us in a crucial way. I will not prove this inequality; the proof is not super complicated, but you need to play a bit with Fourier. This is a general lemma: given any function bounded by 1 with Lipschitz constant bounded by L_0, the H^{1/2} norm is controlled by the BV norm, the integral of the gradient, times log L_0. Instead of the proof, I think an example is more instructive, and you will see why the log appears. Let's do one dimension. So I am on an interval, say (-1, 1), and this is my v: it is 0 on (-1, 0) and 1 on (0, 1). So I have a function which jumps. v', in the sense of distributions, is a Dirac delta at 0, and the integral of |v'| is the mass of the delta, which is 1. OK, now what is the H^{1/2} norm? It means you have to compute the double integral over (-1, 1) of (v(x) - v(y))^2 / (x - y)^2; we are in dimension 1, so the exponent d + 1 is 2. Now, v is either 1 or 0.
So if x is here and y is here, both in the positive interval, this is 1, this is 1, and you get 0. If x and y are both negative, you get 0 again. So the only interesting case is when, say, x is positive and y is negative, because then the numerator is nonzero. So, up to a factor 2, this is the integral where you take y in (-1, 0) and x in (0, 1); you put a 2 because you have to count it twice. The numerator is always 1, and the denominator is (x - y)^2. Now fix a point y < 0. If you integrate 1/(x - y)^2 in x from 0 to 1, the primitive is -1/(x - y), evaluated at 0 and at 1. The evaluation at 1 gives a bounded term; it does not matter. The problem is the singular part, the term when you evaluate at 0, which is 1/|y|, singular when x and y are very close. So you are left with something like the integral from -1 to 0 of 1/|y| dy, and this is plus infinity. So it is not true that if the BV norm is bounded, then the H^{1/2} norm is bounded. But then you say: OK, this function is not Lipschitz, so let's make it Lipschitz. The second case: the function is 0 up to 0, and then you let it transition linearly to 1 over a length ε. This is my new v. Then let's repeat the computation, and let me erase this, it is not so important. Then you do the cases: x here, y here, and so on. From all the pieces, and there are many, essentially these are of the following form: x in (ε, 1) and y in (-1, 0), so the integral from ε to 1 dx and from -1 to 0 dy; then the integral from 0 to ε dx, with y still negative, from -1 to 0 dy.
Then I may have both x and y in the transition interval, so the integral from 0 to ε dx and from 0 to ε dy. And you can have x in the transition interval and y in (ε, 1), so the integral from 0 to ε dx and from ε to 1 dy. And then you exchange all the variables. OK, let's try to understand the computation. The integrand is (v(x) - v(y))^2 / (x - y)^2, and we have to look at the numerator depending on where the pieces are. In the first case, x is in (ε, 1) and y is negative: the numerator is (1 - 0)^2 = 1, so we get 1/(x - y)^2. The inner integral in y gives, as before, 1/x up to bounded terms, and then you integrate 1/x from ε to 1, which is log(1/ε). What about the second one? x is in (0, ε) and y is negative. At the numerator I have v(x) - 0, and on the transition interval v is x/ε, the function growing with slope 1/ε, so the numerator is (x/ε)^2. The integral in y gives again essentially 1/x, so you get (x/ε)^2 times 1/x, which is x/ε^2. So this is 1/ε^2 times the integral from 0 to ε of x dx; the primitive is ε^2/2, and ε^2 over ε^2 is bounded. What about the third case, where both are inside the transition interval? The numerator is (x/ε - y/ε)^2, divided by (x - y)^2, so the 1/ε^2 pops out, and you see that (x - y)^2 divided by (x - y)^2 is 1. So you have 1/ε^2 times the integral from 0 to ε dx, from 0 to ε dy, which is 1/ε^2 times ε times ε, which is 1, bounded. And the last case gives another bounded term; you can check it. In the end, the singular term is the first one, which is log(1/ε). And 1/ε is exactly the Lipschitz constant of v: this function has Lipschitz constant 1/ε. So this is coherent with the lemma, and this example shows that the inequality is sharp: the log of the Lipschitz constant cannot be removed. Then, well, to prove it you need to work. I must say it was luck: I had needed a similar inequality, like three years ago, for something completely different. Not this one, but similar: another interpolation inequality between the H^{1/2} norm of a function and some kind of BV norm. In that case it was about how the H^{1/2} norm changes when you take a function, any function, and you regularize it with the heat semigroup, and you look at how the H^{1/2} norm grows; there you do it and you get some log. In our case, in fact, we used that lemma combined with another lemma; we put them together and we get this one. But OK, you believe that the result is true; then the point is the proof, and we do it essentially in Fourier, most of it. So this is the lemma; I will not prove it, I just wanted to convince you that the result is reasonable. Now let's see how we use it in the first step. We proceed in the following way. Remark: we know that u is bounded and that (-Δ)^{1/2} u = f(u), let's say in R^d. So first of all you use some elliptic regularity. You say: u is bounded implies f(u) is bounded, so the fractional Laplacian of u is bounded.
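Coming back for a moment to the ramp example above: the log(1/ε) growth of the H^{1/2} seminorm, against the constant BV norm, can also be checked numerically. This is my own sketch (the function name and the midpoint-rule discretization are mine, not from the lecture):

```python
import numpy as np

def h_half_seminorm_sq(eps, n=2000):
    """Midpoint-rule approximation of the squared H^{1/2} seminorm of the
    ramp v (v = 0 on (-1,0), v = x/eps on (0,eps), v = 1 on (eps,1)):
    the double integral of (v(x)-v(y))^2 / (x-y)^2 over (-1,1)^2."""
    h = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * h          # midpoints of n cells
    v = np.clip(x / eps, 0.0, 1.0)
    d = x[:, None] - x[None, :]
    np.fill_diagonal(d, 1.0)                     # placeholder, fixed just below
    integrand = ((v[:, None] - v[None, :]) / d) ** 2
    # on the diagonal the integrand tends to v'(x)^2 = 1/eps^2 on the ramp
    np.fill_diagonal(integrand, np.where((x > 0.0) & (x < eps), eps ** -2, 0.0))
    return integrand.sum() * h * h

# The BV norm (integral of |v'|) is exactly 1 for every eps,
# while the seminorm grows like 2*log(1/eps), as in the computation above.
s_coarse = h_half_seminorm_sq(0.1)
s_fine = h_half_seminorm_sq(0.01)
```

Decreasing ε by a factor 10 should raise the seminorm by roughly 2 log 10 ≈ 4.6, matching the singular term log(1/ε) (counted twice by symmetry).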
But the fractional Laplacian, you see: the Laplacian is two derivatives, and you take one half of it, so two derivatives divided by two is one derivative. So this is saying, essentially, that one derivative of u is bounded, which means almost Lipschitz. Almost, because elliptic regularity always misses the endpoint cases. So this does not imply Lipschitz, but it implies that u is C^{0,β} for every β < 1: you are Hölder continuous, you do not gain one full derivative. But then you go again: now u is C^{0,β} and f is C^{0,α}, so f composed with u is C^{0,αβ}; if you compose two Hölder functions, the composition is still Hölder, with the product of the exponents. But f(u) is again (-Δ)^{1/2} u. So now (-Δ)^{1/2} u is C^{0,αβ}, and again you gain one derivative with respect to this: u is C^{1,αβ}, which in particular is Lipschitz, because u is C^1 with bounded gradient. So in particular the gradient of u belongs to L^∞ of the whole space: since I have the equation in the whole space, I can apply this argument in every ball. So just by using the equation twice, I now have a universal a priori estimate: my function is globally Lipschitz, that is, ||∇u||_∞ is bounded by a constant C. OK, so now I need to start to use stability. If I have time, I will try to discuss better how stability works, but since it is a bit technical, I will first try to get through all the steps, taking some technical computations for granted. So here is a general principle: you know that your function is a solution, and you know it is stable; stable means that when you do the second variation, you get an inequality. And in all the proofs that use stability, you have to be lucky: you have to find the right test function.
So you have to decide: I know that differentiating along some direction I get an inequality; which function do I take, along which direction do I differentiate? You have to try and guess. I mean, there is no recipe, I must be honest with you, there is no universal way. In our case, again motivated by some results on fractional minimal surfaces and previous results, for instance by Enrico Valdinoci and Ovidiu Savin, we said: let's try to get all the bounds we need by, essentially, taking the solution and comparing it with a translation of itself. Of course you cannot just translate everything, because you are not allowed to: it is like taking a hyperplane and translating it, you do not see anything, because you translate a hyperplane and you still see a hyperplane. You are only allowed to do perturbations which are compactly supported. So what you do is not translate everything: you take your function in a ball of radius R, you translate it a bit, and then you put a cutoff function to make sure you keep the same boundary data. But inside, essentially, you take your function, you start to translate it, and you see what happens. So, OK, let's start. Proof of step 1. Let W be a stable solution of my equation in B_2; I do not call it u because I am not going to apply this to u directly, but later, to a rescaling of u. I only need it in a ball. Now, stable means the following: for any one-parameter variation W_ε with W_0 = W, you know that d²/dε² at ε = 0 of E_1(W_ε) is nonnegative. Here, remember, E_1(v) is the H^{1/2} energy plus the integral of F(v), where F is the primitive of little f.
So, what variation do we take? We consider W_ε to be W composed with the inverse of Ψ_{ε,σ}, where Ψ_{ε,σ}(x) = x + ε σ φ(x), with φ a cutoff: φ is a function which is equal to 1 in B_1 and 0 outside B_2. Outside B_2 this map does nothing, and inside B_1, where φ = 1, I am moving the point x by ε σ. Here σ (sorry, I first wrote v, but v is bad notation because in the paper we use it for something else) is a direction, an element of S^{d-1}. So you move the points in some direction, you plug this into the formula, and you compute the stability. Out of the stability you have to work a bit, but this is the inequality you get: the integral over B_1 of |∇W| is bounded by C times (1 + the norm of W in H^{1/2}(B_1)). That is it; this is a computation: you plug in, you work a few pages, and that is exactly what you get. But now, you see, you want to do an interpolation, because here I have two quantities that are very similar to each other, so if I could relate them it would be nice. And in fact that interpolation lemma is fantastic, because not only does it allow me to replace the H^{1/2} norm by a gradient term, but notice: in the lemma the H^{1/2} norm appears squared, and the gradient does not. So now, this inequality is what stability gives.
Now apply the lemma: since in the lemma the H^{1/2} norm is squared and here it appears with power one, I can replace it by the square root, and I get that the integral over B_1 of |∇W| is bounded by C (1 + sqrt( log M times the integral over B_2 of |∇W| )), where M is the L^∞ norm of ∇W in B_2. I throw away the 1 inside the square root because you can absorb it easily; I use the inequality in this simpler form just to avoid too many terms, and there is an extra constant, it does not matter. So I am going to cheat a bit sometimes, just to simplify the expressions, but this is where you use the interpolation. Now you see the beauty, because I control this quantity by the square root of itself. So you use the usual trick: for every a, b ≥ 0 and every δ > 0, the square root of ab is bounded by δ a + (1/δ) b. I apply this with a = the integral over B_2 of |∇W| and b = log M, and I get that the integral over B_1 of |∇W| is bounded by C (1 + (1/δ) log M) + C δ times the integral over B_2 of |∇W|. So for every stable solution in B_2 I have this inequality. Now, a priori what you would like to say is: take δ so that C δ is less than one half, and then the quantity is controlled by something plus one half of itself, and fantastic, I can absorb it. The only problem is that on the left I have B_1 and on the right I have B_2: it is not the same ball, the right one is larger. So you have to work a bit to absorb it, but that is not the crucial point. There you need a covering argument.
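The absorption step just described, written out (my own transcription; C, δ, M as above):

```latex
% Young's inequality: for a, b \ge 0 and \delta > 0,
% (\sqrt{\delta a} - \sqrt{b/\delta})^2 \ge 0 gives \sqrt{ab} \le \delta a + \tfrac{1}{\delta} b.
% Applied with a = \int_{B_2}|\nabla W| and b = \log M, the stability-plus-interpolation bound
\int_{B_1} |\nabla W| \;\le\; C\Bigl(1 + \sqrt{\log M \int_{B_2}|\nabla W|}\,\Bigr)
% becomes
\int_{B_1} |\nabla W| \;\le\; C\Bigl(1 + \tfrac{1}{\delta}\log M\Bigr) + C\,\delta\int_{B_2}|\nabla W| ,
% and one chooses C\delta \le \tfrac12 to absorb the last term
% (after the covering argument, since the balls on the two sides differ).
```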
So you don't, it's not enough to apply this inequality in one ball, you have to apply it in many balls and cover around. So it's a technique which is a bit standard in some sense, once you know that a quantity is controlled by a small fraction of itself, even if it's a larger ball you can do it. So by covering by covering argument what you get is that the interval B1 half in the end of grad W is controlled by constant log M where M is the ellipsis constant. So grad log of grad W. Then in 3. And now what you do I have to choose to what apply. So just apply these to the function W of X equal U Rx. So you take your solution and you stretch it and you apply this inequality. What do you get? You get the integral over B1 half of grad W. So grad W is when you take the gradient of W this is the gradient of U Rx. So you get the gradient of U Rx. Then by change rule the gradient you get an R in front dx. By a constant and then the log of del n3 norm of the norm of W. But what is the gradient of W? This is R, so grad W at X grad W X is R grad U at Rx. And now remember U is bounded as bounded gradient. This term this is bounded. So these are constant and have an R. And now you just do a change of variable you call this Y. And you do change of variable this becomes the integral of grad U over B R over 2 dy. And you get 1 over R to the M-1 in front. D is the dimension. Because you get R to the D from the Jacobian and there was an extra R here to constant and you get minus 1. So this is exactly the first statement and said the integral of the gradient of W in a ball of radius R or R over 2 goes like R to the D minus 1 plus a log times log. And once you have this for the h1 alpha you just plug it in here. Once you have the bound on this you just repeat this. So you know this and then you can just take this inequality and combine it with this inequality to get a bound. So this implies by interpolation also that the norm of W in h1 alpha of B1 square or maybe B1 fourth. 
If here you have B1 here you get B2 so since you have B1 alpha so here B1 alpha B1 fourth is bounded by constant log of grad W and then I have another log so square a log there and another log from the lemma. And then you just apply that and you get the other bound. So this is step one. Now how do we improve this? To improve step one we need maybe let me make a remark remark this is suboptimal something I already said by a log so imagine that your solution is 1D so if I knew that U is 1D then I have a function that does something like this it's equal to minus 1 here let's say very close then there is a fixed region where it transition and then here it goes to 1 so this is my U 1D solution goes from minus 1 then in some fixed amount of time it goes to almost 1 ok so if you have a function like this which is essentially constant here and then it transition here and then you take a huge ball of radius R what is the how much gradient this is? U here there is no gradient in this part because U is close to 1 here there is no gradient all the gradient is in this band so here is a slab in Rd so this is like one cross the other directions so this region is where the gradient lives is bounded so you see the integral over Br of grad U is essentially the integral in this slab of grad U is bounded constant and the measure of these so you have a these are two parallel airplanes at distance 1 you intersect with a ball the measure of these is R to the D minus 1 you are intersecting a ball with the set contained to two parallel airplanes so for 1D solution you want this at the moment we have an extra lock so it's not optimal what we do now is that we do oh I shouldn't erase this step 2 we choose a second test function W epsilon here in the variation and W epsilon here is something like this so it's again W epsilon of X is W compose so I can do it with you U epsilon no no no I will do it with your scale so you assume that W is a stable solution we are going to apply this as before 
to U over X we think W is U over X later so you have W you do the variation W compose with psi what was it epsilon is 1 of X and what is psi epsilon sigma of X this is X plus epsilon sigma psi tilde if you want of X no psi tilde of X before it was phi so phi before was a function which was 1 in B1 and 0 at side B2 now psi tilde we take it like this so from R to R over 2 in this time here is R here is R over 2 so phi tilde equal 1 here and then here phi tilde is 2 minus 2 minus 2 log most of X over log R so this is what is called the logaritmic cutoff so to go from R to R over 2 we don't go it linearly we go using a log and this is sometimes useful because it makes you gain a log so instead of using the obvious way of going from so usually the cutoff function from R over 2 to R you just say okay let me draw a straight line I just cut it directly like this, tak instead if you use a log like this you gain an extra log sometimes in some estimates you get a logaritmic cutoff so you do a cutoff using the log it works slightly better so you plug in here and now you get this inequality so you have to compute again what the stability tells you you are going to get formulas of this type so before the stability told me that the gradient in some ball was controlled by the gradient in some other ball we had this relation between the inter of the gradient here and using an interpolation I tell you what we get we get something a different equality so by stability plus interpolation interpolation I mean that lem again we get the norm of W in h1 half of b1 fourth so in a ball square divided sorry just one thing I noted there the scaling no, I didn't know it yeah so for every r I have the following information that this divided by log r so r is the scaling I use comes from the cutoff I can choose r as I want is bounded by constant 1 plus square root 1 over k sum j from 1 to k norm of W h1 half of b2 to the k plus j this is a bit scary divided by k 2 to the k plus j yeah where what 
OK, so here I choose r = 2^k in this inequality: for every k you take r = 2^k, and you can prove such an inequality. You look at it and say: wow, OK, what are you doing? And this is true in every dimension. Now I apply this inequality. So set again W(x) = U(x/r) as before, you plug this inside, and you look at what it tells you about U. It is only in dimension 3 that you get a meaningful relation. The inequality itself is true in every dimension, but when you apply it things rescale, so powers of the dimension pop out; it is like before, when you take an inequality and rescale you get r^(d-1), so there are going to be powers of the dimension. If the dimension is 3 things go well; if the dimension is higher you get powers that you don't control.

So for d = 3, set the following: a_j is the H^{1/2} norm of U squared in B_{2^j}, the dyadic ball of radius 2^j, divided by, what? I want to prove that this energy grows like r^2 (we are in dimension 3) times log r, not log^2 r: I said that I have to gain a log. So I divide by the radius squared, which is 2^{2j}, times the log of 2^{2j}, which is 2j, so comparable to j, and this ratio is the quantity that I want to prove to be bounded: energy in a ball of radius r at most r^2 log r. That is the optimal growth I want to prove, and a_j is the quantity I want to control.

Now I check what the inequality tells me. This is what you get: for every k, a_k is bounded by a constant times 1 plus the square root of the average (1/2k) times the sum over l from 1 to 2k of a_{k+l}. I mean, you first play a bit with the quantity; it is just algebraic manipulation, though not super obvious, because you have to see the scaling inside. And it is only in dimension 3 that you get such a nice relation, where no other power appears: you only get the a_j's appearing everywhere, and nothing else.

Now you say: what do I do with that? This is my only information, together with the lemma. Recall that the lemma tells me that the H^{1/2} norm of u squared in a ball of radius r is bounded by C r^2 log^2 r. In terms of j this means a_j ≤ C j: a_j divides out r^2 log r, so there is an extra log, which tells me that a_j grows at most linearly. My goal is to prove that a_j is bounded.

You look at this and say: I don't know how to do it, but let me simplify. I claim that if you have a sequence of numbers a_j satisfying this inequality together with the linear growth, then there is no way out: they must be bounded. To understand this, let's do a simple case. Let's remove the 1 from the inequality; it is there, but it is not important. And, you see, what the inequality is telling me is that a_k is controlled by the square root of an average: I have a sum of 2k terms divided by 2k, so this is a mean. In the simple case, instead of the average let's keep only one of those terms, say the largest one, l = 2k, which gives a_{k+2k} = a_{3k}. So the simple case is: for every k, a_k ≤ C0 sqrt(a_{3k}), with a_{3k} one of the terms appearing in that sum; in the real inequality you have an average and then a square root, but you get an inequality like this. Now I claim that a sequence satisfying this, together with at most linear growth, must be bounded. Why?
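To fix notation, the setup just described can be written out in formulas. This is my reconstruction from the spoken argument, so the normalizations and constants are schematic and may differ from those in the paper:

```latex
% Dyadic quantity in dimension d = 3 (radius 2^j, so \log(2^{2j}) \simeq j):
a_j \;=\; \frac{\|U\|_{H^{1/2}(B_{2^j})}^{2}}{2^{2j}\, j}\,,
\qquad
\text{goal:}\quad \sup_j a_j < \infty
\;\;\Longleftrightarrow\;\;
\|U\|_{H^{1/2}(B_r)}^{2} \;\le\; C\, r^{2}\log r .

% The dyadic inequality (square root of an average) and its model case:
a_k \;\le\; C\Bigl(1+\sqrt{\tfrac{1}{2k}\textstyle\sum_{l=1}^{2k} a_{k+l}}\;\Bigr),
\qquad
\text{model case:}\quad a_k \;\le\; C_0\,\sqrt{a_{3k}}\,.

% Input from the lemma (the r^2 \log^2 r energy bound): at most linear growth,
a_j \;\le\; C\, j .
```

The argument below plays the model-case inequality against this linear growth.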
Assume you have a sequence like that. Fix a very large M, to be chosen, and assume that there exists a certain j such that a_j > M. I want to derive a contradiction for M large enough; this will prove that a_j ≤ M for every j. So assume such a j exists. Then, calling the constant C0 just to remember which one it is, M/C0 < a_j/C0 ≤ sqrt(a_{3j}); this is what the simple-case inequality says, for every k. If you square this, you discover that a_{3j} > M^2/C0^2, and if M is large, much larger than C0, then this is much larger than M, because we have M squared: the sequence has grown.

Then I apply the inequality again, now at 3j: M^2/C0^2 < a_{3j}, and a_{3j}/C0 ≤ sqrt of a at 3 times 3j, so 9j, not just 3j, and when you take the square you get a_{9j} > M^4/C0^6. You keep doing that, and you get a lower bound on a_{3^m j}; you have to look at what you get, and it is M raised to a power that grows super-linearly. Let me look at the paper, because otherwise I won't remember — I mean, this is the simple case, in the paper we probably get something slightly worse — but essentially it grows like M^{2^m} divided by C0 to the power more or less 2^{m+1}; it doesn't matter. But M is much larger than C0, so M/C0^2 is large, and this is essentially greater than theta^{2^m} for some theta > 1. So this lower bound grows like theta^{2^m} with theta > 1: it is super

exponential in m. On the other hand, by the linear growth, a_{3^m j} is bounded by C times 3^m j. So you see that for m large there is a contradiction: the log information told me that the sequence grows at most linearly in j, which along the scales 3^m j is exponential growth in m, but iterating the inequality I get something super exponential. So this is the contradiction: absurd, for M large enough. The contradiction was: if I can find a single j for which a_j is larger than a critical M — M has to be large enough to kill the constant C0 — then the sequence starts to grow super exponentially because of the inequality: a_j large means a_{3j} much larger, then you apply the inequality at 3j and a_{9j} is much larger still, and it grows super fast. Absurd; so it must be bounded.

So, awesome: we gained the extra log — it is not there anymore. The corollary is that a_j is bounded, equivalently the energy in B_r grows at most like C r^2 log r: we gained the extra log in dimension 3. And now I close; I just tell you how you close. It is another test function, a third test function, which you plug into the stability inequality — I just tell you, you can check it in the paper. Essentially, instead of a log cutoff you do a log-log cutoff, something that goes to zero super slowly. You plug it inside, and instead of the previous estimates you get the following: the integral over B_1(a) of the positive part of the directional derivative of u, times the integral over B_1(a) of its negative part, is bounded by a constant over log log R, for every R. Then you let R go to infinity and you get 0.

So what you discover by stability, with this other cutoff: in every unit ball you fix a direction sigma, and either the positive part or the negative part of the partial derivative of u in that direction is identically 0 — the derivative cannot change sign in the ball. You cannot have a bit of positive here and a bit of negative there in the same direction. And the only way a function can do that is if it is 1D, because in every ball it means the directional derivative has a sign; so you see that this
is compatible only if the level sets are one-dimensional. And this is true in every ball B_1(a), for every a: once you have that this product is 0 in B_1(a), you move the ball around, so it is true in the whole space. And a function for which, at every point, one of the two parts vanishes must be 1D. Once you have that — OK, I think that was enough. Thanks for your attention.
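As a quick numerical sanity check of the iteration argument in the lecture, here is the model case in code. All constants below (M, C0, C1, and the starting index j) are illustrative values chosen by me, not taken from the lecture or the paper. Starting from a_j > M with M above the critical threshold C0^2, the model inequality a_k ≤ C0 sqrt(a_{3k}) forces a_{3^m j} ≥ (a_{3^{m-1} j}/C0)^2, a doubly exponential lower bound, which must eventually violate the linear growth a_k ≤ C1 k:

```python
# Model-case iteration: a_k <= C0 * sqrt(a_{3k}) gives a_{3k} >= (a_k / C0)^2,
# so a lower bound a_j > M iterates to roughly a_{3^m j} >= (M / C0^2)^{2^m},
# while the log-lemma only allows linear growth a_k <= C1 * k.

def first_contradiction(M=10.0, C0=2.0, C1=100.0, j=1, max_m=50):
    """First m at which the iterated lower bound on a_{3^m j} exceeds
    C1 * 3^m * j, or None if it never does (e.g. when M <= C0^2)."""
    lb = M  # lower bound on a_j
    for m in range(1, max_m + 1):
        lb = (lb / C0) ** 2           # a_{3^m j} >= (a_{3^{m-1} j} / C0)^2
        if lb > C1 * (3 ** m) * j:    # contradicts a_k <= C1 * k at k = 3^m j
            return m
    return None

print(first_contradiction())          # doubly exponential beats linear quickly
```

With these illustrative defaults the contradiction already appears at m = 3, whereas for M below the threshold C0^2 (say M = 3 with C0 = 2) the iterated lower bound decays and no contradiction is produced, matching the role of the critical M in the lecture.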