So, good morning everybody. Today we continue with Enrico Valdinoci, who will give the third lecture on non-local equations and the different perspectives. So, thank you Enrico. — We will try to finish the... Can you hear me? Is it on? Today we will try to finish our discussion of the eight fundamental differences between the Laplacian and the fractional Laplacian. Actually, let me mention that if you want to go into more advanced subjects, the topic that Xavi introduced yesterday, that of non-local minimal surfaces, is probably the richest in presenting very important structural difficulties and differences with the classical case. So, since Juan Luis doesn't like eight, I just listed seven fundamental differences between classical minimal surfaces and non-local minimal surfaces. If we have time, we will look into these differences in more detail, but just to let you taste the flavor: I wrote seven, and some of them were mentioned by Xavi yesterday. Uniform perimeter bounds, which are valid in the non-local case and not valid, or sometimes not known, in the classical case; stickiness properties at the boundary, namely the fact that non-local minimal graphs are discontinuous at the boundary; the fact that the non-local catenoid grows linearly instead of logarithmically; the fact that there are unstable non-local minimal cones in one dimension less than in the classical case; the fact that the non-local mean curvature flow develops neck-pinch type singularities also in the plane; and the fact that non-local phase transitions are more rigid than the classical ones — this last one is probably the topic of Alessio's lectures at the end of the period. This said, the last difference we were discussing was the decay at infinity. So, we discussed the decay at infinity of the fractional heat flow, which is like 1/|x|^(n+2s), and of the fractional Allen–Cahn equation, which in 1D is like 1/|x|^(2s).
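As a quick sanity check of the heat-flow decay stated above (my own toy computation, not part of the lecture): for s = 1/2 in dimension n = 1 the fractional heat kernel is explicit — it is the Poisson kernel — so the polynomial tail |x|^(-(n+2s)) = |x|^(-2) can be verified directly.

```python
import numpy as np

# For s = 1/2 in dimension n = 1, the fractional heat kernel is the
# Poisson kernel p_t(x) = t / (pi * (t^2 + x^2)): it has a polynomial
# tail |x|^(-(n + 2s)) = |x|^(-2), in sharp contrast with the Gaussian
# (exponential) decay of the classical heat kernel.
t = 1.0
p = lambda x: t / (np.pi * (t**2 + x**2))

# Doubling |x| should divide the kernel by about 2^2 = 4.
ratio = p(100.0) / p(200.0)
print(ratio)  # close to (200/100)**2 = 4, i.e. quadratic decay in |x|
```

The point of the ratio test is that it is insensitive to normalizing constants: only the power of |x| matters.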
And now what I would like to mention briefly is the decay of ground state solutions of the fractional Schrödinger equation, which is again just polynomial instead of exponential, as is the decay of the fundamental solution of the fractional Schrödinger equation: as I was mentioning, for large |x| it decays like 1/|x|^(n+2s). Now some of you were asking me: okay, but why does this happen? Well, the first answer is: I don't know. But let me try to give you an argument — likely to be wrong, but convincing in a sense. What I think of these decays is that in all of these equations the right-hand side has to compete and balance with the left-hand side. The solution, from my psychological point of view, would like to decay as much as possible at infinity, but then it looks at the left-hand side and changes its mind. For instance, here, outside the origin the right-hand side is zero, okay? So, in a sense, Gamma will try to decay as much as is prescribed by the fractional Laplacian. If you start by thinking that Gamma is compactly supported, or decays very fast, then no matter what, the fractional Laplacian will decay like 1/|x|^(n+2s), and so it forces the decay to be this one instead of an exponential one, okay? So, if you are convinced that this is a good answer, it justifies the decay for the Schrödinger equation. But the decay for the heat flow — it's not convincing. So let's see, let's see if at least we get the right numbers with this argument. I'm not trying to sell this as a proof, but it should give somehow an idea, because for the heat flow the situation is the following. You are solving an equation of this type, with u at time zero equal to a delta. So you may think that at time zero you are very concentrated at the origin, and then after an infinitesimal time you develop a little bump.
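The balancing heuristic just described — the fractional Laplacian of any compactly supported function decays like 1/|x|^(n+2s), no matter how fast the function itself decays — can be tested numerically. This is a sketch of mine with an arbitrary bump; the normalizing constant of the fractional Laplacian is dropped since it cancels in ratios.

```python
import numpy as np

def frac_lap_far(u, s, x, grid):
    # For x far outside supp(u), u(x) = 0, so (up to the normalizing
    # constant, which cancels in ratios) the fractional Laplacian is
    #   (-Lap)^s u(x) ~ -integral of u(y) / |x - y|^(1+2s) dy   (1D, n = 1)
    dy = grid[1] - grid[0]
    return -np.sum(u(grid) / np.abs(x - grid)**(1 + 2*s)) * dy

s = 0.5
# a smooth bump supported in (-1, 1)
bump = lambda y: np.exp(-1.0 / (1.0 - y**2)) * (np.abs(y) < 1)
grid = np.linspace(-0.999, 0.999, 4001)

v1 = frac_lap_far(bump, s, 50.0, grid)
v2 = frac_lap_far(bump, s, 100.0, grid)
ratio = v1 / v2
print(ratio, 2**(1 + 2*s))  # doubling |x| divides the value by ~ 2^(1+2s) = 4
```

So even for a function that vanishes identically outside (-1, 1), the left-hand side of the equation only decays polynomially, which is the "change of mind" in the heuristic.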
Now, you may hope that this little bump converges to zero very fast in x, but then you look at the equation and you see that the increment in time of u at infinity has to grow like 1/|x|^(n+2s). Even if your initial datum is a delta, so just something like this, you may hope that after an infinitesimal time you are still very much concentrated — fine, but if this happened, then the time derivative would grow like 1/|x|^(n+2s), so after another infinitesimal time you would reach the level 1/|x|^(n+2s) anyway. So the equation pushes u up at infinity by exactly the amount prescribed by the fractional Laplacian. Now, if you are skeptical about my argument, you have a very strong point in your favor, because the Allen–Cahn decay is 1/|x|^(2s), not 1/|x|^(1+2s), which is what this argument would predict. This tells you that my argument is not really rigorous, but it also tells you that if you fight a bit, you can make it a little more convincing. What is the Allen–Cahn equation? In 1D, so on R, it is (-Lap)^s u = u - u^3. So I call v the derivative of u, and I take one derivative of the equation. What does this give? It gives (-Lap)^s v = v - 3u^2 v = (1 - 3u^2) v, and at infinity we know that u converges to 1 or -1, so this looks more or less like -2v. Now you apply the argument of balancing with the right-hand side -2v, say for large x: applying this argument to v, you may expect that v now decays as much as is prescribed by the fractional Laplacian, so it decays like 1/|x|^(1+2s), which means that when you integrate back, 1 - u decays like 1/|x|^(2s).
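The last integration step can be checked symbolically — this is just the calculus, under the assumed decay of v:

```python
import sympy as sp

# If v = u' decays like x^(-(1+2s)) for large x, integrating the tail
# gives 1 - u(R) = integral from R to infinity of v dx ~ R^(-2s),
# which matches the Allen-Cahn decay stated in the lecture.
x, R, s = sp.symbols('x R s', positive=True)
tail = sp.integrate(x**(-(1 + 2*s)), (x, R, sp.oo), conds='none')
print(sp.simplify(tail))  # R**(-2*s) / (2*s)
```

Declaring s positive is what guarantees convergence of the tail integral, mirroring the assumption 0 < s < 1 in the lecture.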
OK, so this may not be a real proof, so let us now do a real proof of one of the things that I said was, to me, probably the most surprising difference with respect to the classical case: the fact that all functions are locally s-harmonic up to a small error. That is, you can prescribe any function locally and approximate it with functions with vanishing fractional Laplacian. So, the result: for any epsilon bigger than zero and for any nice u, there exists some u_epsilon such that u_epsilon is, say, C^2- or C^k-close to u and the fractional Laplacian of u_epsilon vanishes in B_1. Somehow you can extend your function u in a suitable way so that the integral oscillations perfectly compensate, and the error that you make with respect to your original function is as small as you like. Let me mention that this is a very general result. At the beginning I thought, okay, it's already strange enough that it works for the fractional Laplacian, and since I come from an elliptic perspective, I thought, well, this must have something to do with ellipticity — but actually no. The same result holds true basically whenever you put here a linear operator which has some non-local coordinates. You can take your favorite linear operator with two sets of coordinates: a set that I call the local coordinates, in which the operator is just a differential operator of any order you like, with possibly different orders in different coordinates, and the non-local coordinates, in which the operator is just a sum of fractional Laplacians in the different coordinates, possibly of different orders. And you can prove the result for this general monster operator. Proving the result for the monster operator in any dimension is a little bit tedious, not because it is hard in itself, but because of course you have many variables and many indices and things get tricky.
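In symbols, the statement being proved can be written as follows (my formalization of what is on the board; the norm and the ball B_1 are as in the lecture):

```latex
\forall\, \varepsilon > 0 \;\; \forall\, u \in C^{k}(\overline{B_1})
\;\; \exists\, u_\varepsilon : \quad
\| u_\varepsilon - u \|_{C^{k}(B_1)} \le \varepsilon
\quad \text{and} \quad
(-\Delta)^{s}\, u_\varepsilon = 0 \ \text{in } B_1 .
```

Note that u_epsilon is defined on all of R^n (it must be, for the fractional Laplacian to make sense), and the approximation is only required inside B_1: the freedom lives entirely in the data outside the ball.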
So, just to let you see what's going on, I will give a simple proof, taking the operator not to be the fractional Laplacian but an operator L which corresponds, for instance — exactly, exactly. In the first lecture we drew this picture; maybe I'll repeat it. Let's see, with colored chalks it looks better. You have your function u, which is defined only in B_1, and I drew somehow my worst enemy, the parabola. The parabola doesn't look s-harmonic, for whatever reason, but then you put an epsilon. Of course I drew the approximation in the L-infinity norm, but you can do it in smooth norms as well. And then what happens is that you put a u_epsilon here which is s-harmonic, exactly because outside this function has some oscillations — it goes to zero after a while, but it is the oscillations that make it s-harmonic. Yes, I think this is a very nice subject of research, and concerning your type of question: the proof I will give now — I'm sure you are not going to like it, because it's too soft in a sense. It's a proof by contradiction, so it doesn't tell me that this is the right picture. Nevertheless, there are some more advanced proofs using Hörmander-type methods, by Mikko Salo for instance, in which one can have better control of the radius at which you have to go to zero, or things like this. But that is not a proof that you can do in five minutes, I think — it's more advanced. This proof doesn't give you any information on the real picture, though, as I was saying last time, this is almost indistinguishable from the real picture. I drew these things as I wished; there is no reason why this thing should be s-harmonic, but then I can repeat the theorem on a larger scale. So I can say, okay, the orange function may not be s-harmonic in B_2, but then I apply the theorem in B_2, so I can complete it outside to a function that is s-harmonic and looks like this in B_2.
So somehow it says that really any picture is fine locally, and the important part always comes from infinity, which, I mean, sounds a bit unnatural to me — also for s large this doesn't seem right — but it is right, and it is what it is. So, just a rough idea: the picture that I drew may not be the correct picture. I have here my u, and I drew here something that looks reasonable to me, but it may not look reasonable to you. What I am saying is that I can repeat the theorem on a larger scale. Suppose this is now B_2: maybe the orange function in B_2 is not the right continuation that makes u_epsilon s-harmonic in B_1, but I can apply the theorem in B_2 and say, okay, there is a function, almost indistinguishable from the orange function, which is s-harmonic in B_2 and hence also in B_1. So the picture I drew is essentially correct, because I can change the data closer and closer to infinity and make any picture correct. There is no way to distinguish, just by looking at the picture, whether or not the function is s-harmonic, because this is an information that dramatically comes from the data at infinity. So my suggestion is to prove this result not for the easiest case of the fractional Laplacian, and not for the hardest case of the monster operator, but for something in between: let us look at what happens for the fractional heat operator. Good. So, in a sense, I am going to prove that all functions are locally s-caloric up to a small error — s-caloric in the sense of lying in the kernel of the fractional heat operator. So, first of all, some reductions. First, I can assume that u is a polynomial, and the reason is somehow the Stone–Weierstrass theorem — well, a small refinement of Stone–Weierstrass: usually in Italy they teach us Stone–Weierstrass in one variable, doing the approximation in L-infinity, but if one
looks at the approximation with Gaussians, basically Stone–Weierstrass holds in any dimension and for any C^k norm. So what I am saying is that any C^2 function u can be approximated by a polynomial: I first approximate u with a polynomial, and then the polynomial with u_epsilon, so I can reduce to the case of a polynomial. Then, in fact, as the problem is linear, I can reduce to the case in which u is a monomial — you know what a monomial is, so just let me write it: u(X), where the variable X comprises both x and t, is of the form X^m in multi-index notation. The reason is again that the problem is linear: if I can approximate any monomial, then I sum up and I approximate the polynomial. The third observation is that what is important to do is to produce the right derivatives at zero. The goal is to find a function, say v, such that the derivatives of v at zero are prescribed up to order |m|: the partial derivatives d^alpha v(0) are prescribed — well, this is bad notation, because m here is a multi-index, so you have to put the norm of the multi-index here; when I have a multi-index, the modulus for me is just the sum of the entries. So let me call this j, just not to carry the multi-index notation around. Why is this enough? Well, because then you can just blow up your picture: if this is true, you can take v_r(X) := r^j v(X/r). Suppose that in this way v is of the form v(X) = X^m + O(|X|^(j+1)). Here you are fixing all the derivatives of v at the origin to be the ones I want them to be, namely the derivatives of u — and the derivatives of u are zero, zero, zero, up to the derivative which is whatever is prescribed here. So v has the same derivatives near zero up to the correct order, which means that when I do this rescaling I get exactly X^m — because j is exactly the norm of m — plus an error of order 1/r. So basically, when I do the difference
between v_r and u, I get something small, and so I prove the theorem. So the real issue is just to understand what happens near the origin. Now, this seems a great simplification, but if you think of the classical case, it's exactly what you cannot do: for instance, if the Laplacian of your function is zero, its second derivatives have to lie in a codimension-one space, because, say in dimension two, the second derivative in the first direction has to balance the second derivative in the second direction. It's not that you can produce any numbers you want — the sum of the two has to be zero. So this is somehow the core of the proof, and it's the part in which the non-local structure is richer than the local one. So I now prove point three, and, as I was mentioning, the proof is by contradiction. I look at the space V which contains all the functions h, say of x and t, such that Lh = 0 in a neighborhood of the origin. Well, this is a non-empty space, because the zero function is there. Then I define V_0: I take all the functions in V, I compute their derivatives at zero up to the right order, and I list these derivatives in an array. So for any function in V this collects a finite-dimensional array, V_0 is contained in a Euclidean space, and I call N + 1 its dimension — N + 1 is just the number of multi-indices alpha for which |alpha| <= j holds. And now my claim is that V_0 is not only contained in this space, but is actually equal to this space, that is, it really spans everything. And you see, if I prove this, then I am done, because I have found a function which is s-caloric near the origin — sorry, I forgot to write it: there exists a v such that Lv = 0 near the origin and v solves the problem. If this is true, then in particular I take the array of the derivatives of u, and I realize this array through functions that are s-caloric. How do I prove
this? Well, the proof is by contradiction. If not, it means that this V_0 — which is a linear space, because the sum of s-caloric functions is s-caloric — is a proper linear subspace of R^(N+1). So a picture would be like this: the blackboard is R^(N+1), and V_0 has to be a linear subspace, this plane for instance. So I call nu the normal direction, and the negation of the fact that V_0 spans everything is that V_0 is contained in the hyperplane { y in R^(N+1) : nu . y = 0 }. Now what do I do? The idea is that I choose one h and find a contradiction, by taking the solution of the Dirichlet eigenvalue problem. So I look for a function phi which is the first Dirichlet eigenfunction of the fractional Laplacian in (-1, 1). I didn't say, but I'm just doing the one-dimensional case: one-dimensional space variable and one-dimensional time variable. And to make things much simpler, let me actually do the case of the one-half fractional Laplacian; the computation would be the same for general s, but the exponents would be more annoying. So let me take s = 1/2, otherwise one has to adjust exponents and things are less transparent. So, with Dirichlet datum: I'm solving (-Lap)^(1/2) phi = lambda phi in (-1, 1), and phi = 0 outside. Now, the important thing is that we know more or less what this function looks like — this is something we discussed in the first lecture. We know that the boundary behavior of this function is not Lipschitz up to the boundary; it is only Hölder, like a circumference. We computed, if you remember, what happens for the square root of the Laplacian of sqrt(1 - x^2), and this is exactly the same behavior. So basically, if I take a small delta, what happens is that phi(-1 + delta) goes like delta^(1/2): phi looks like this
near -1, and, if you want in a distributional sense, you can take derivatives of this expression: one can actually prove that, for delta positive, on this side phi(-1 + delta) goes like delta^(1/2), and if I take k derivatives of this guy, they go like delta^(1/2 - k). OK. Then I take h(x, t) := e^(-tau t) phi(-1 + (tau/lambda) x) — I apologize for the Greek letters, I know that my lambda is badly written, but I think you understand my writing. Now I have to check that h belongs to V, because later on I want to use h to find the contradiction. So let's check it: I have to apply L to h. The time derivative is easy because of the exponential — tau is fixed for me, so taking the derivative in t I get -tau times e^(-tau t) phi, that is, -tau h. Then I have to apply the Laplacian to the one half to phi. What do I get? By scaling, the Laplacian to the one half is an operator of order one, because the Laplacian is an operator of order two, and two times one half makes one. So the fractional Laplacian of the rescaled function is the fractional Laplacian of phi evaluated at that point, times the scale factor tau/lambda — and this is the only reason I took one half here; if you do the proof with a different s, you have more annoying exponents, and that's why I did it. OK, now I remember who phi is: phi is an eigenfunction, so instead of the square root of the Laplacian of phi I can just write lambda phi. So, if you agree, I erase the fractional Laplacian and I put the lambda; the lambda cancels with the scale factor, leaving +tau e^(-tau t) phi, and this is zero together with the -tau h, because whatever else is remaining is h, if I'm not mistaken with the signs. OK, fine. Now, once you have that, you can use your h in your contradictory assumption
written here. Since h belongs to V, when I compute its derivatives at zero they have to lie in that hyperplane: the key point is that since h is in V, the array of its derivatives is in V_0, and V_0 is in the hyperplane, which means that if I compute all the derivatives of h, evaluate them at zero, and pair them with nu, I get zero. Here I'm using the notation in which nu is a unit vector, so nu is in S^N, and I write its coordinates as nu_alpha, with the same multi-index convention. Now, unfortunately, I have to introduce a new convention: I have to distinguish between time and space derivatives. So my multi-index alpha will be thought of as alpha = (alpha_x, alpha_t), where alpha_x counts the x-derivatives and alpha_t the t-derivatives; so nu_alpha is some nu_(alpha_x, alpha_t), and, if you prefer, the norm |alpha| is just alpha_x + alpha_t. I just wrote this in coordinates. Then I take the derivatives. If I take alpha_t derivatives of the exponential, I get (-tau)^(alpha_t); then I have to write the exponential, but I will take t = 0 afterwards, so e^(-tau t) at t = 0 gives 1. Then I have to take alpha_x derivatives of phi and evaluate them: when I take alpha_x derivatives, I get (tau/lambda)^(alpha_x) times the alpha_x-th derivative of phi. Ah — I made a mistake, so let me introduce an additional parameter delta. The same computation as before would have worked, since a translation plays no role here, but I want to keep this delta: I want to have here -1 + delta, that is, h(x, t) = e^(-tau t) phi(-1 + delta + (tau/lambda) x). If I don't put the delta here, I get into a little trouble, because phi is not differentiable at -1, so I want the evaluation point to be well inside my domain — that's the only reason I put the delta. And then I get this, and now I use what I know about the derivatives of phi: I can write this guy as the sum over alpha_x + alpha_t <= j of nu_(alpha_x, alpha_t) times (-1)^(alpha_t) times tau to
the power alpha_t + alpha_x, divided by lambda to the alpha_x, and then times delta^(1/2 - alpha_x) times (1 + o(1)), where o(1) is something that goes to 0 as delta goes to 0 — here I'm just using the behavior of the derivatives of phi near the boundary. And I know that this whole sum is equal to 0. Now what I do is take alpha_x as large as possible. This nu_alpha can be 0 for some coordinates, but it's not possible that all the coordinates are 0, because nu is on the sphere; so maybe the bigger coordinates vanish, but sooner or later, going down, I find a coordinate which is not 0. So I take alpha-bar_x to be the largest alpha_x for which there exists an alpha_t in the right range, with alpha_t + alpha_x <= j, such that nu_(alpha_x, alpha_t) is not 0 — otherwise it would mean that they are all 0. If I write this, the sum here actually runs with the condition alpha_x <= alpha-bar_x, because whatever happens above alpha-bar_x has nu equal to 0, so you are just summing zero contributions. OK, so this maybe reduces the sum, or maybe it doesn't: if the biggest alpha_x already gives a non-zero nu, you don't reduce it, but if the bigger alpha_x gives a zero nu, then you reduce it down to alpha-bar_x. So you have that 0 is equal to this sum. Now what I do is multiply this expression by delta to the power alpha-bar_x — this is a positive number — oh, sorry, I multiply by delta^(alpha-bar_x - 1/2): the -1/2 is useful because it removes this 1/2 here. Now I send delta to zero, and whenever alpha_x is strictly less than alpha-bar_x the resulting exponent alpha-bar_x - alpha_x is positive, so the term gives zero. OK, so if I send delta to zero, the only case remaining is the one in which alpha_x is equal to alpha-bar
of x. So let me write it like this: here in principle I have alpha_x <= alpha-bar_x, but if the inequality is strict I get zero, so by sending delta to zero I keep only alpha_x = alpha-bar_x. So this is alpha-bar_x, this is alpha-bar_x, this is alpha-bar_x; now alpha-bar_x minus alpha-bar_x gives zero in the exponent of delta, so this factor gives one, and when I send delta to zero, 1 + o(1) is equal to 1. Now I have that this expression is equal to zero; then let me multiply by lambda to the alpha-bar_x and divide by tau to the alpha-bar_x. If I do this, these factors disappear, but this means that what is left is a polynomial in the variable tau which is identically zero — the sum over alpha_t of nu_(alpha-bar_x, alpha_t) times (-1)^(alpha_t) tau^(alpha_t) equals zero for every tau. So, by the identity principle for polynomials, every coefficient has to be zero; but the coefficients are not all zero, by the way we chose alpha-bar_x. This is a contradiction: it tells you that V_0 cannot be contained in a proper linear subspace of R^(N+1), and so it somehow proves the theorem, and it gives you that all functions are locally s-caloric. The general structure for the general operator is somehow the same; one just has to be a little more careful with this choice of indices, in order to reduce, step by step, one at a time, what happens when I send delta to zero in the multidimensional case — but the structure is more or less the same. So, in the next five minutes, let me just recall that in the next lectures I would like to address a bit the regularity theory of non-local minimal surfaces. This is a topic that Xavi has already introduced and mentioned, so I will be very happy to take advantage of the material that has already been presented. One result that I would like to present in some detail is the analogue of the Bernstein theorem for non-local minimal surfaces. The Bernstein theorem in the classical case says that if you have a minimal surface which is a graph — say a minimal graph — and the dimension of your ambient space is eight or less, then your
minimal graph has to be a plane. More precisely, the classical Bernstein theorem says that if u is a solution of the minimal surface equation — so its graph has zero mean curvature — and n <= 7 (I always get this wrong: seven, right, because then the ambient space has dimension eight), then u is an affine function. In the classical case the dimension seven is optimal, because for n = 8 Bombieri, De Giorgi and Giusti constructed an example. So this would be R^8, and this would be the Simons cone in R^8: think of R^8 as having two coordinates x and y in R^4 x R^4, and what I drew here is the set |x| = |y|, whatever it means in R^8. So basically they constructed a saddle-shaped function on R^8, whose graph lives in R^9: you have one coordinate more, which is positive here, negative there, and which goes to plus and minus infinity. The Simons cone is the zero level set of this function, which is growing here, something like this, and going to minus infinity on the other side. We will discuss the analogue of this problem in the fractional setting, and let me say that the problem is still very open: there are some results in this direction, but we do not know whether the results obtained are optimal, and we do not have any example like this. So in the fractional case the complete picture is still very open, but I will try at least to introduce some material that can be useful to see what's going on in this field. And maybe my time is over.
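As a side check of the Simons cone computation (my own symbolic verification, not from the lecture): the cone { |x| = |y| } in R^8 is the zero level set of f = |x|^2 - |y|^2, and the mean curvature of a level set of f is div(grad f / |grad f|), which indeed vanishes exactly on the cone.

```python
import sympy as sp

# The Simons cone { |x| = |y| } in R^8 = R^4 x R^4 is the zero level set
# of f = |x|^2 - |y|^2.  The mean curvature of a level set of f is
# div(grad f / |grad f|); it should vanish precisely where |x| = |y|.
xs = sp.symbols('x1:5', real=True)
ys = sp.symbols('y1:5', real=True)
f = sum(v**2 for v in xs) - sum(v**2 for v in ys)

allv = xs + ys
grad = [sp.diff(f, v) for v in allv]
norm = sp.sqrt(sum(g**2 for g in grad))
# mean curvature of the level set (up to a dimensional constant)
H = sum(sp.diff(g / norm, v) for g, v in zip(grad, allv))

on_cone = {xs[0]: 1, ys[0]: 1}             # a point with |x| = |y|
on_cone.update({v: 0 for v in xs[1:] + ys[1:]})
off_cone = {**on_cone, ys[0]: 2}           # a point with |x| != |y|

print(sp.simplify(H.subs(on_cone)))   # 0
print(sp.simplify(H.subs(off_cone)))  # nonzero
```

Doing the algebra by hand, H reduces to (|y|^2 - |x|^2) / (|x|^2 + |y|^2)^(3/2), which makes the vanishing on |x| = |y| transparent; being minimal, of course, does not make the cone area-minimizing in low dimensions — that is exactly the content of the dimension threshold above.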