The second day of the conference, and we begin with the second lecture of Professor Yanyan Li. Thanks. So if a positive function satisfies this degenerate elliptic equation in a punctured space, then it should be radially symmetric. Actually u only needs to be continuous, satisfying the equation in the viscosity sense, but let us prove it for C² functions. Let us recall the definition. This Γ is a symmetric cone in ℝⁿ, lying on one side of a hyperplane; the half-space on this side is Γ₁. So if λ(A^u), where A^u is the conformally invariant operator, lies on this surface ∂Γ, then u should be radially symmetric. If Γ is this Γ₁, namely the eigenvalues lie on the boundary of the half-space, then the equation is just Δu = 0, so u is a harmonic function, and this theorem is a kind of nonlinear version of that. Last time we described a procedure to prove this result, and we only need to prove the following proposition, a comparison principle. Here u is a subsolution of the equation — we take λ(A^u) on the boundary ∂Γ — and v is a supersolution, on the other side of the cone, both with finitely many singular points. If the singular one v is greater than or equal to the regular one u on the boundary, then they are ordered: v ≥ u inside. So this is a comparison principle. We will prove this result, and let us prove it in the easier case of one singularity; the proof is similar for more singularities. So let Ω be a bounded open set, u a C² subsolution, and v a supersolution which is C² away from one point, possibly singular there. Then v ≥ u on the boundary implies v ≥ u in the domain.
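For reference, here is a sketch of the objects involved, under one common normalization from the literature on conformally invariant equations (the normalization is my assumption; the lecture slides may use a slightly different one):

```latex
% Symmetric cone with vertex at the origin:
%   \Gamma \subset \mathbb{R}^n \ \text{open convex cone, symmetric in the } \lambda_i,
%   \{\lambda_i > 0 \ \forall i\} \subset \Gamma \subset
%   \Gamma_1 := \{\lambda : \textstyle\sum_i \lambda_i > 0\}.
% One common normalization of the conformally invariant operator (n \ge 3):
A^u \;=\; -\tfrac{2}{n-2}\, u^{-\frac{n+2}{n-2}}\, \nabla^2 u
      \;+\; \tfrac{2n}{(n-2)^2}\, u^{-\frac{2n}{n-2}}\, \nabla u \otimes \nabla u
      \;-\; \tfrac{2}{(n-2)^2}\, u^{-\frac{2n}{n-2}}\, |\nabla u|^2\, I .
% Symmetry theorem: u > 0,\ \lambda(A^u) \in \partial\Gamma \ \text{in}\
% \mathbb{R}^n \setminus \{0\} \ \text{(viscosity sense)}
% \ \Longrightarrow\ u \ \text{is radially symmetric.}
```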
So we will prove this, taking just one singularity. Last time we proved this comparison principle when there is no singularity. There is a basic fact: λ(A^u) ∈ Γ̄ implies that u is superharmonic. This is just because of the structure: when we take the trace, we see a positive multiple of −Δu. So anything on this side of the cone is a superharmonic function. To prove the proposition we will use two elementary facts about harmonic functions — because our singular solution is always superharmonic, we can use them. Lemma 2.1: if w is superharmonic away from a point and nonnegative, then either w is identically zero or its liminf at the singular point is positive. It just says that we do not see the singularity. This is a classical result, because a point has capacity zero, and capacity-zero sets are exactly the sets with this property: you do not see the singularity. Lemma 2.2 says that two functions touching a superharmonic function from below at the puncture must have the same gradient there — I forgot to write the superharmonicity; I should add Δv ≤ 0, so v is superharmonic. Of course in one dimension this is not true: I can draw a superharmonic function with a kink, touch it from below by a w₁ like this and a w₂ like that, and the gradients are not the same; but that is because it is 1-D. Here, because a point has harmonic capacity zero, the function is actually superharmonic across the puncture, so one cannot have w₁, w₂ with different gradients there. So these are two simple facts, and I forgot to write superharmonicity in the second lemma. Now we can prove the comparison principle. Suppose not; then v is less than u somewhere. If we draw the picture, u may look like this, and v might be something like that — this is v.
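The superharmonicity claim can be checked by taking the trace; with the normalization of A^u assumed above, the gradient terms cancel exactly. The precise form of the two lemmas below is my reading of the slides:

```latex
\operatorname{tr} A^u
  \;=\; -\tfrac{2}{n-2}\, u^{-\frac{n+2}{n-2}} \Delta u
    \;+\; \tfrac{2n}{(n-2)^2}\, u^{-\frac{2n}{n-2}} |\nabla u|^2
    \;-\; \tfrac{2n}{(n-2)^2}\, u^{-\frac{2n}{n-2}} |\nabla u|^2
  \;=\; -\tfrac{2}{n-2}\, u^{-\frac{n+2}{n-2}} \Delta u .
% Hence \lambda(A^u) \in \overline{\Gamma} \subset \overline{\Gamma_1}
%   \ \Rightarrow\ \operatorname{tr} A^u \ge 0 \ \Rightarrow\ \Delta u \le 0.
% Lemma 2.1: \Delta w \le 0 \ \text{in } B_1\setminus\{0\},\ w \ge 0
%   \ \Rightarrow\ w \equiv 0 \ \text{or}\ \liminf_{x\to 0} w(x) > 0.
% Lemma 2.2: \Delta v \le 0 \ \text{in } B_1\setminus\{0\}; \ \text{if }
%   w_1, w_2 \in C^1 \ \text{touch } v \ \text{from below at } 0,
%   \ \text{then } \nabla w_1(0) = \nabla w_2(0).
```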
So somewhere v is below u. By Lemma 2.1, the liminf of v at the singularity is positive, so v is bounded from below there. Therefore we may multiply v by a constant bigger than one — multiplying v by a constant does not change the equation; the equation stays the same — so that v lies above u on the boundary and touches u somewhere, maybe at zero. Actually, we claim the touching has to occur at the singular point; one cannot have a picture where they touch only away from it. Because if one had such a picture — a strict inequality near the singularity — then on a small ball around it v would be strictly bigger than u. Then one can apply the comparison principle on the big domain minus this little ball, where u and v have no singularity. That reduces to the case we already proved, which tells us v > u outside the little ball, and the two statements together give a contradiction. So this picture does not occur: whenever v ≥ u on the boundary and they touch, the touching must occur at the singular point. It might touch elsewhere too, but at least it touches the singular point. So now for this picture: v touches u at zero, while on the boundary there is a little gap, a positive distance. Therefore, for a small fixed ε, we look at all |x| < ε and consider the function obtained by translating u a little bit. When we translate a little, then on the boundary, because there is a gap, the translate always stays away from v; and near zero it may cut through this picture or it may be detached.
But then I can always find a positive constant to multiply by, adjusting the translate so that its infimum difference with v is exactly zero: maybe it is higher, maybe it is lower, I adjust it a little. Call the result w^x; it is still a subsolution, because our equation is translation invariant and invariant under multiplication by constants. We also know that v ≥ w^x on the boundary. So exactly as we argued before, w^x must touch v at the singular point zero, for every x; call the common touching value α, just like before. This is true for every x. Therefore by Lemma 2.2, all these functions touch v from below at the singular point, so the gradients there are the same: ∇w^x(0) = ν for some vector ν independent of x. Now w^x is by definition a constant multiple of the translate of u, so ∇w^x(0) = (α/u(x)) ∇u(x) = α ∇ log u(x). That means ∇ log u(x) is the same vector for every x, so log u(x) = b·x + c for all x in B_ε. Exponentiating, u = e^{b·x+c}, and if we compute the Laplacian, Δu = |b|² u ≥ 0, so u is actually subharmonic. But v is superharmonic. So Δ(v − u) ≤ 0 while v − u ≥ 0 in this ball, and they touch at a point. By the strong maximum principle they have to be the same: v equals u. So v is actually smooth, and we are back to the situation we already proved — in the smooth case the comparison principle holds. Therefore we have proved this symmetry result.
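The key step above — log u affine forces u = e^{b·x+c}, whose Laplacian is |b|²u and hence nonnegative — can be sanity-checked numerically. This is my own illustration, not from the lecture; b, c, and the sample point are arbitrary choices:

```python
import math

# If log u is affine, u(x) = exp(b . x + c), then Delta u = |b|^2 u >= 0,
# so u is subharmonic -- the fact used against the superharmonic v.
b = (0.3, -0.7, 0.5)
c = 1.2

def u(x):
    return math.exp(sum(bi * xi for bi, xi in zip(b, x)) + c)

def laplacian(f, x, h=1e-4):
    """Approximate Laplacian by central second differences in each coordinate."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x)
        xm = list(x)
        xp[i] += h
        xm[i] -= h
        total += (f(xp) - 2.0 * f(x) + f(xm)) / h**2
    return total

x0 = (0.1, 0.2, -0.3)
lap = laplacian(u, x0)
expected = sum(bi * bi for bi in b) * u(x0)  # |b|^2 * u(x0)
print(lap, expected)
```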
A consequence of that is the Liouville theorem: any solution in the entire space is a constant. So that is the symmetry result and the Liouville theorem. Next I will prove another analytic result appearing in such studies: a gradient estimate. In this generality it was actually proved quite early. If we have a solution bounded above by a constant b, then |∇ log u| is bounded by a constant depending on b: if a solution is upper bounded, then the gradient of its log is bounded. That implies the Harnack inequality. The main subtle point of this estimate is that we are not assuming u is bigger than a positive constant. The proof we present here makes use of the Liouville theorem we just proved. Recall that Γ is the cone and the operator is the conformally invariant one above. We prove this by first proving the easier part, where we also assume a lower bound a > 0; the conclusion there is a gradient bound with a constant depending on a as well. The second step is to remove the dependence on a, and for that we need the Liouville theorem. [Question: so the constant is C(b) in the final estimate?] Yes — it also depends on f and Γ, but I am mainly stressing the dependence on the L^∞ bound; yes, thanks, the upper bound. So here we impose a lower bound for now. The method for this step is standard: a Bernstein-type argument. I will not show all the computations, just the scheme; this scheme can be applied, and has been applied, to a rather broad class of equations. Write v = log u. Then v satisfies a fully nonlinear elliptic equation — elliptic, but not uniformly elliptic.
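The two statements just mentioned, written out as I understood them (the ball radii B₃, B₁ follow the slides; the Liouville argument is the one-line reduction to the symmetry theorem):

```latex
% Liouville: u > 0 \ \text{on } \mathbb{R}^n,\ \lambda(A^u) \in \partial\Gamma
%   \ \text{(viscosity)} \ \Longrightarrow\ u \equiv \text{const},
% since an entire solution solves the equation in \mathbb{R}^n\setminus\{p\}
% for every p, hence is radially symmetric about every point.
% Gradient estimate:
% f(\lambda(A^u)) = 1,\ \lambda(A^u) \in \Gamma,\ 0 < u \le b \ \text{in } B_3
%   \ \Longrightarrow\ \sup_{B_1} |\nabla \log u| \;\le\; C(n, f, \Gamma, b).
```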
In this particular case the nonlinearity depends on the Hessian of v through the eigenvalues of this matrix, but in general the method does not require that particular dependence through eigenvalues: it applies to nonlinear elliptic equations F(∇²v, ∇v, v) = 0 involving the Hessian, the first derivatives, and v. We want to prove a gradient bound, because the bound on u translates to a bound on v — v is bounded on both sides (I did not write the lower bound for v, but it is here). For Bernstein-type arguments it is much easier to keep track of things if we have two-sided bounds; with only a one-sided bound it becomes rather tricky, and one has to keep track of everything very, very carefully. That is why we impose the one-sided lower bound at this step: we have a full C⁰ estimate on v, and now we want to estimate the gradient. Whether this method succeeds depends on the structure of the equation. We want local estimates, and our equation is very good: it actually admits local estimates. Some other equations do not have local estimates, like the Monge–Ampère equation in dimension three and higher. Since we want local estimates, take a cutoff function ρ: equal to one inside B₁, zero outside B₂, and nicely interpolating in between. We look at the function G built from ρ, |∇v|², and a factor φ(v) depending on v. This φ we leave flexible; we do the computation and choose afterwards what kind of φ works. It turns out we use a φ satisfying a first inequality and a second inequality on a closed interval, and it is then very easy to pick such a φ. On the other hand, if we insisted on having them on a half-ray, such a φ would not exist.
Otherwise this method would just give the whole estimate directly. So we want to prove that G has an upper bound C, a constant depending on α and β and of course on the φ chosen, but not on v itself. Since G vanishes outside B₂, there is an interior maximum point; we fix a maximum point x₀ and try to prove the value there is universally bounded. At an interior maximum the gradient of G vanishes: ∂ᵢG = 0 for all i. This gives n equations involving φ and φ′ (always through v(x₀)), ∇v, and ∇²v — because our expression involves derivatives up to ∇v, one differentiation produces expressions up to second order. So we have n equations, frozen at x₀. Next we take the Hessian of G: it must be ≤ 0 because x₀ is a maximum, and evaluating at x₀ we see an expression involving derivatives of v up to third order. Also, the equation is second order: differentiating it once, we get an identity with zero on the right-hand side, involving data up to third derivatives of v at x₀. Then we use ellipticity: F_{λᵢ} is positive, so contracting the negative semidefinite Hessian of G with this positive matrix and summing, we get something ≤ 0. So at x₀ we have three pieces of information: (1) the first-order relations, (2) the contracted Hessian inequality, (3) the differentiated equation. In (2) and (3) one can easily replace the third derivatives of v by derivatives up to second order using (1), and we also insert the equation itself into the inequality. We end up with one inequality evaluated at x₀ — a rather long expression involving data only up to second order.
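The scheme just described, in symbols; the exact form of G and the notation F^{ij} for the linearized coefficients are my assumptions:

```latex
% v = \log u, \qquad G = \rho(x)\,\varphi(v)\,|\nabla v|^2, \qquad
% \rho \ \text{cutoff:}\ \rho \equiv 1 \ \text{on } B_1,\ \rho \equiv 0 \ \text{off } B_2.
% At an interior maximum x_0 of G:
\partial_i G(x_0) = 0 \ (i = 1,\dots,n), \qquad \nabla^2 G(x_0) \le 0,
% and contracting with the nonnegative linearized coefficients
% F^{ij} = \partial F/\partial M_{ij}:
\sum_{i,j} F^{ij}\, \partial_i \partial_j G(x_0) \;\le\; 0 .
% The n first-order relations eliminate the third derivatives of v; the
% differentiated equation handles the rest, leaving an inequality in data
% up to second order that bounds G(x_0).
```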
But once you write it down, it is quite clear which terms are the ones you want, and so on. Anyway, I hide half a page of computation, and you end up with an inequality like this. This certainly gives a bound on ρ φ(v)|∇v|², namely a bound on G. So this gives the upper bound for the gradient estimate. — I seem to have run way faster than I thought. That reminds me of something that happened many, many years ago in Taiwan — Luis was there maybe — at a birthday conference for Louis Nirenberg. I gave my talk, finished halfway, looked at the audience and asked, what should I do? I remember Varuda was sitting there; he said, well, you could start all over again. [Audience: Maybe we should thank you — no, you'll continue. — I could add something on the spot. — Very good, very good, they will thank you again.] Okay, so it is really a surprise; I thought it was so long, since I put everything on the slides and hid all the computations. So maybe I will briefly describe what I will cover next time; next time I will have the formulas. We have now proved step one. The next step toward these estimates is to prove a Hölder estimate. Step two: our solution satisfies 0 < u ≤ b in B₃, say, and instead of a gradient estimate we first prove a Hölder estimate. Once this Hölder estimate is proved — so suppose we have completed it — the gradient estimate follows just by using step one. This is the easier argument. The point is that the Hölder estimate still gives us Harnack: the quantity below is bounded by a constant.
Namely, |log u(x) − log u(y)| ≤ C|x − y|^α for x, y in B₁, for example. This still gives Harnack: |log u(x) − log u(y)| ≤ C means u(x) and u(y) are comparable, and we can reverse their roles. So the Hölder estimate already gives Harnack. Then, for the gradient estimate, let us look at the function û(x) = u(x)/u(0). Consider the equation it satisfies, and assume f is homogeneous of degree one. Since A^u has a definite homogeneity under constant multiples, inserting û we see f(λ(A^û)) = u(0)^{4/(n−2)} f(λ(A^u)), a positive power of u(0) times the original right-hand side. Taking f(λ(A^u)) = 1, the new right-hand side equals the constant u(0)^{4/(n−2)}, which has an upper bound since u(0) ≤ b. And even though our earlier estimates were proved for right-hand side equal to one, replacing it by a constant less than one is the same proof: nothing deteriorates as the constant becomes small, all the estimates remain the same. So the right-hand side has a constant upper bound. Moreover û(0) = 1, so by Harnack û is bounded above and below by constants on B₁. Step one then gives |∇ log û| ≤ C, and ∇ log û is the same as ∇ log u. So the gradient estimate is proved by first assuming an upper and lower bound, and then, when there is only an upper bound, proving a Hölder estimate. So now the main thing is to prove the Hölder estimate. Proof of step two: we work in B₁ and look at the Hölder seminorm of log u. Suppose for contradiction there is a sequence along which it goes to infinity.
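The normalization û = u/u(0) works because A^u scales simply under constant multiples; with the normalization of A^u assumed earlier and f homogeneous of degree one, the computation is:

```latex
A^{c u} \;=\; c^{-\frac{4}{n-2}}\, A^{u} \quad (c > 0), \qquad
\hat u := \frac{u}{u(0)}
\ \Longrightarrow\
f\big(\lambda(A^{\hat u})\big)
 \;=\; u(0)^{\frac{4}{n-2}}\, f\big(\lambda(A^{u})\big)
 \;=\; u(0)^{\frac{4}{n-2}}
 \;\le\; b^{\frac{4}{n-2}} .
```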
So we have a sequence of solutions uᵢ whose Hölder seminorms — suprema of |log uᵢ(x) − log uᵢ(y)| / |x − y|^α — go to infinity. Of course the near-maximal pairs of points may go to the boundary, so one does a selection. One cannot simply normalize as in the gradient case. If we were doing a gradient estimate rather than Hölder, we would say: assume the maximum of the gradient occurs at the origin, go to that maximum point, and rescale centered there, obtaining an entire solution in the limit. Here there is a little difference: the Hölder largeness is captured at a small scale. One measures the largeness by a small piece — a pair of points at small separation realizing, in a precise sense, almost the seminorm — takes the maximum over such small pieces, centers there, and may assume the separation goes to zero. Then one rescales around that point: since it carries the maximum, after rescaling the notion of Hölder largeness becomes one, just as one would normalize a large gradient to one by rescaling the variable. So after the rescaling one ends up in the limit with an entire solution U, and because of the normalization, the Hölder seminorm of log U at zero is, in the appropriate localized sense, equal to one.
So we have that, and the rescaled log uᵢ converge to log U in C^γ for any γ > α, locally. Because the Hölder quantity is maximal at the center, after rescaling every unit ball going outward carries Hölder control at most one, so going any finite distance the functions stay bounded above and below; on any bounded domain log uᵢ has both upper and lower bounds. Therefore the gradient estimate of step one enters: it gives gradient bounds on any bounded set, and the convergence is in a norm stronger than C^α. The limit satisfies λ(A^U) ∈ ∂Γ, because under this normalization our equation degenerates to the boundary equation. Now the Liouville theorem comes in: we know that U has to be a constant. However, that violates the normalized Hölder seminorm being one at the origin — just as, in the gradient version, the renormalized gradient at zero would equal one. It is a contradiction. So the degenerate Liouville theorem proves the Hölder estimate. I still have ten minutes, so maybe I have to stop; I did not estimate the time properly. — Let us thank Yanyan for this second talk.