OK, so welcome to the session. We are now at lecture number 4, with Enrico Valdinoci. OK, so, let me recall what I commented on in lecture number 1, namely which structural conditions on the kernel of the master equation recover, in the limit, operators in divergence and in non-divergence form. The master equation starts with a special kernel; one postulates a structural condition, and one recovers the corresponding limit form. So, as s goes to one, one recovers the operator in divergence form, D_i(a_ij D_j), where a_ij is, up to constants, the integral over S^{n-1} of omega_i omega_j over M(x_0, omega) to the n plus 2s. Here M is a matrix, smooth and positive definite. The operator itself is (1 minus s) times the integral over R^n of u(x) minus u(x minus y), divided by the kernel, dy. Then there is the symmetry of the matrix that one postulates, namely that M(x, y) is equal to M(x, minus y). As s goes to one, in this case one recovers the operator in non-divergence form, and I think that a_ij, up to constants that I neglect, should be... the expression is the same formally, but actually somehow different. Ok, and how to prove this? So let's say this is divergence form, divergence form. To prove the divergence form case is more tricky, I think; one approach that is a little bit simpler is to look at the energy.
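To fix the notation, here is a sketch in formulas of the objects just described; this is my reconstruction from the recording, and the normalizations and exponents are indicative:

```latex
% Operator (normalizations indicative):
\[
  L_s u(x) \;=\; (1-s)\,\mathrm{P.V.}\!\int_{\mathbb{R}^n}
  \frac{u(x)-u(x-y)}{M(x,y)^{\,n+2s}}\,dy ,
\]
% with M smooth, positive definite, and, say, homogeneous of degree 1
% in the second variable, so that the kernel scales like |y|^{-n-2s}.
% Under the symmetry M(x,y) = M(x,-y) one expects, as s -> 1,
\[
  L_s u \;\longrightarrow\; -\,a_{ij}(x)\,\partial_{ij} u,
  \qquad
  a_{ij}(x_0) \;=\; c \int_{\mathbb{S}^{n-1}}
  \frac{\omega_i\,\omega_j}{M(x_0,\omega)^{\,n+2}}\,d\omega ,
\]
% while the exchange symmetry M(x, x-\bar x) = M(\bar x, \bar x-x)
% leads instead to the divergence form -\partial_i(a_{ij}\,\partial_j u).
```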
So, to notice that this condition naturally induces an energy functional, and to pass to the limit in the energy functional. But if you really want to pass to the limit in the equation, it is a little bit more tricky, because if you pass to the limit in the energy functional, it is not completely obvious that the derivative of the limit functional is the limit of the equation. So somehow there is an exchange of limits that one has to check. If you just want to pass to the limit in the equation, let me tell you a possible strategy. A possible strategy is, first of all, to notice something that is useful in itself, which is this formula: if you have a function phi which is positively homogeneous of degree, say, 2 plus alpha... Yes, so somehow you call x minus y, say, z. Ok, let's briefly discuss that. So you are saying, you want to call it x bar. Ok, so let me erase this. Yes, x is a fixed point at this level and I'm integrating in y. But the idea is that if I look at the associated energy functional, what is the energy functional you would guess from there: it is u of x minus u of x bar. Yes, yes, yes, but I mean, the only way I understand the condition is through the energy functional. Ok, so if I call x minus y equal to x bar, it becomes then M of x and x minus x bar. It's important not to miss the sign. So y is x minus x bar, I think, and this is M of x, and then there is minus y, which is x bar minus x. So if you exchange x with x bar, it is exactly this, and the energetically nice thing that you get is that the functional you would like to consider now becomes this. And now you see that x and x bar are charged somehow the same by the energy functional. So in a sense the easiest approach to this kind of problem is to look at the energy functional. Oh, I forgot the one minus s. And pass to the limit in the energy functional.
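In formulas, the energy functional and the change of variables just described should look like this (my reconstruction; signs and constants are worth rechecking):

```latex
\[
  \mathcal{E}_s(u) \;=\; \frac{1-s}{2}
  \iint_{\mathbb{R}^n\times\mathbb{R}^n}
  \frac{\big(u(x)-u(\bar x)\big)^2}{M(x,\,x-\bar x)^{\,n+2s}}\,dx\,d\bar x ,
  \qquad \bar x := x-y .
\]
% If M(x, x - \bar x) = M(\bar x, \bar x - x), the integrand is symmetric
% under exchanging x and \bar x: both points are "charged the same",
% and the functional is a genuine nonlocal Dirichlet-type energy.
```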
Nevertheless, if you want to really pass to the limit in the equation, it is also, I would say, an instructive exercise, because it requires some computations that are always good for developing intuition. And in that case, ok, I'm not saying that it's the most elegant proof that you can do, but one sort of straightforward proof is based on this observation. The first thing is that it's easier to just keep the right order in the Taylor expansion and deal with homogeneous functions. So a little lemma that I think is useful in general is the following: suppose that phi is positively homogeneous of degree 2 plus alpha, say with alpha bigger or equal than zero; this means that phi of tx is equal to t to the 2 plus alpha times phi of x. Then you can compute explicitly expressions like this, of course, just by homogeneity, but let me write it because I'm going to use it several times. Well, there is a factor that is probably one half, though I forgot the factors there, so it's not really important. So basically, ok, it's the same expression, but this integral is in R^n, and this integral is just on the sphere, and the rest simplifies by scaling, by homogeneity. Sorry, here, right? No, I think it's ok, because you integrate the rho to the... let me show you the computation, right? Just pull the rho outside, then you... It dies, ok, ok. So let me say to everybody why it dies: I use polar coordinates. So if I use polar coordinates, from the first integral I have two integrals, right? One in the variable that I call rho, the radius, and the other that is the spherical average. I integrate in the radius, and now the magic is the choice of the exponents. Why do I choose these exponents? Because they must be the right ones; I found them by trial and error, of course, but these are the right ones for which the rho variable integrates out. It kills exactly this factor, and the two comes because the factor that you get is 2 minus 2s rather than 1 minus s.
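The scaling computation behind this lemma can be verified numerically. This is my own sanity check, not part of the lecture: in polar coordinates, a numerator that is positively homogeneous of degree 2 + alpha against the kernel |y|^(-n-2s) produces the radial factor, the integral of rho^(1+alpha-2s) over (0, 1), which equals 1/(2+alpha-2s); multiplied by (1-s) this is exactly 1/2 when alpha = 0, for every s < 1, and it vanishes as s goes to 1 when alpha > 0.

```python
def radial_integral(beta, m=16, n_steps=200000):
    """Midpoint rule for the integral of rho**beta over (0, 1), beta > -1,
    after the substitution rho = t**m, which removes the singularity at 0."""
    h = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * h
        # d(rho) = m * t**(m-1) dt, so the integrand becomes m * t**(m*beta + m - 1)
        total += m * t ** (m * beta + m - 1)
    return total * h

def lemma_factor(alpha, s):
    """(1 - s) times the radial factor produced, in polar coordinates, by a
    (2 + alpha)-homogeneous numerator against the kernel |y|**(-n-2s)."""
    return (1 - s) * radial_integral(1 + alpha - 2 * s)

# alpha = 0: the factor equals 1/2 for every s < 1, hence also in the limit.
print(lemma_factor(0.0, 0.75))   # ~ 0.5
print(lemma_factor(0.0, 0.95))   # ~ 0.5
# alpha > 0: the factor (1-s)/(2+alpha-2s) vanishes as s -> 1.
print(lemma_factor(2.0, 0.95))   # ~ 0.024
```

The substitution rho = t^m is just a standard trick to tame the integrable singularity of rho^(1+alpha-2s) near zero; the exact value of the factor is (1-s)/(2+alpha-2s).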
And then you are only left with the spherical integral. So, with this, one can do the divergence form case. The divergence form case can be done with this trick; I'm not really suggesting that it's the best approach, but it's still one possible approach. That is: first, expand the term u of x minus y, which is on top of the integral, up to order y cubed. Second, it's convenient in this case to fix x and call M of y the matrix in the denominator, when we think of it as a function of y, and expand it again up to order 3. Well, once you know M, you know the denominator; it's a little bit more painful, but it's the absolute value to some power, or something that you know, so you know the first order, you know the second order, and so on. Then, when you do things like this, you put everything into the formulas, you are patient enough, and you obtain somehow things that are positively homogeneous of degree 2 plus alpha, with alpha, I think, equal to 0 and equal to 2. And so, what you get using... let me call it star. Using star in the limit, you obtain something a little bit unpleasant. Well, one thing, just to be very precise: in the Taylor expansion I expand up to order 3, which means that I have also the gradient and the second derivative, while here in principle I don't have the gradient, so I have to subtract it by brute force from the equation. So use the map that sends y to nabla u of x dot y, divided by M of (x_0, y) to the n plus 2s, and then use star. Then what you get, if I'm not mistaken, is something like this, up to constants. Let me... actually, I made a little mistake with the sign of the operator, because the way I wrote it, the Laplacian is the negative Laplacian. So if you do this, what you get is almost what you want, because it's like a_ij u_ij plus b_j u_j.
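Schematically, the two expansions just described can be written as follows (my sketch; the symbols m_1, m_2 are hypothetical shorthand for the correction terms, and the precise bookkeeping is exactly the painful part):

```latex
\[
  u(x-y) \;=\; u(x) \;-\; \nabla u(x)\!\cdot\!y
  \;+\; \tfrac12\,y\!\cdot\!D^2u(x)\,y \;+\; O(|y|^3),
\]
\[
  \frac{1}{M(y)^{\,n+2s}} \;=\;
  \frac{1}{M_0(y)^{\,n+2s}}\Big(1 + m_1(y) + m_2(y) + O(|y|^3)\Big),
\]
% with M_0(y) = M(x_0, y) the frozen kernel and m_1, m_2 the first- and
% second-order corrections in y.  Multiplying out, and after the
% first-order (gradient) part is subtracted by brute force, every
% surviving product is positively homogeneous of some degree 2 + alpha,
% alpha >= 0, and the lemma (star) sends every term with alpha > 0
% to zero as s -> 1.
```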
Ok, let me write the coefficients, so that if you want to do the exercise you have them: a_ij should be one quarter of the integral over S^{n-1} of omega_i omega_j divided by M(x_0, omega) to the n plus 2s, because you pass to the limit; and then you have b_j, which is n plus 2 over 2 times the integral over S^{n-1} of omega_i omega_j, times the derivative of M at (x_0, omega), over M(x_0, omega) to the n plus 4. Ok, which seems to miss the point, because in principle we are not getting something in divergence form; but now one uses the structural condition: check that b_j is actually the derivative of a_ij with respect to x_i, with repeated index i. Ok, so in a sense you can start backwards: I mean, you can have something like this and ask yourself what is the right condition to put, and the right condition is the one that recovers the divergence form in the limit. The other part is easier. Ah, sorry, sorry: I'm so used to writing n plus 2s that whenever I don't have an s, I make an error. Yes, and of course let me say that the advantage of working on S^{n-1} is that this guy is not singular anymore, you are just on the sphere. Oh, I didn't say it, but I'm always assuming that M is a smooth and non-degenerate matrix, so the denominator only vanishes when the point is zero. So it doesn't vanish on the sphere. Ok, now, the other part is much simpler. The other part is much simpler because you just expand the function up to second order and you use the symmetry required in the condition to subtract the first order. So, somehow the non-divergence case is easier. I would say it's easier because there is less structure, right? It's a miracle that b_j is the derivative of a_ij, so the case in non-divergence form has to be easier: in this case you just proceed as in the divergence form, so expand u, expand the matrix M; you can expand the matrix M up to order 1, or the vector M(x,y) y up to order 2.
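As a sanity check of my own (not from the lecture): if the kernel is isotropic, say M(x_0, omega) identically 1 on the sphere, then the spherical formula for a_ij gives a multiple of the identity, so the limit operator is a multiple of the Laplacian, as it should be. A quick numerical check in dimension n = 2, using the 1/4 normalization from the formula above (which may be off by a constant):

```python
import math

def a_entry(i, j, M=lambda w: 1.0, n=2, n_theta=100000):
    """(1/4) * integral over S^1 of w_i * w_j / M(w)**(n+2) d(theta),
    by the midpoint rule on the circle.  The factor 1/4 and the exponent
    n+2 follow the (tentative) normalization from the lecture."""
    h = 2.0 * math.pi / n_theta
    total = 0.0
    for k in range(n_theta):
        th = (k + 0.5) * h
        w = (math.cos(th), math.sin(th))
        total += w[i] * w[j] / M(w) ** (n + 2)
    return 0.25 * total * h

# Isotropic kernel, M = 1 on the sphere: a_ij is a multiple of delta_ij,
# so the limit operator is a multiple of the Laplacian.
print(a_entry(0, 0))   # ~ pi/4
print(a_entry(1, 1))   # ~ pi/4
print(a_entry(0, 1))   # ~ 0
```

Plugging in an anisotropic even M instead would give a constant-coefficient operator a_ij d_ij with a genuinely non-diagonal a_ij.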
You see that things have to be easier because you are not expanding to order 3, so you are saving one order. Of course, I didn't know this in principle; I just noticed it when I did the computations. Now, the tricky part: as we did here, we have to add the gradient into the equation to kill the first order. And to do this here, the gradient that you have to add or subtract to the equation is this. If you want to do this, you have to check that this guy is odd, and the reason for which it is odd is exactly the structural condition. You see that the condition there says exactly that when I change y with minus y, I get the same; and then use the positive homogeneity of these things, I think with alpha equal to zero, so it's really the easy case, and you get the result. Ok, so not particularly fun, but I think it's kind of instructive, because it gives a flavor of how the master equation can recover objects that we more or less know in the limit, and we sort of understand what is the geometric structure of the master equation, which reflects in the end into a divergence or a non-divergence form. So it can also be a suggestion: if one wants to prove something, inspired by the non-divergence form case, one can try to understand what is the right structure to put on the general integral operator to have some hope of proving something. Well, I don't know. I think that people call it master because it's the original equation, right? In a sense, the people who are skeptical about differential equations think that differential equations are too cheap, because they just model what happens nearby; but the real equation, I think, the master equation, am I correct? The master equation is an integral equation, because it takes into account all the interactions of all the particles in the world, averaged by a probability measure, so I think that the language comes from statistical mechanics. Yes, and in principle, when I wrote the master equation, I wrote it in space and time.
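The oddness claim is easy to test numerically. This is my own toy check, with a hypothetical even kernel M and a hypothetical gradient vector: if M(x, .) is even, the map that sends omega to (grad u(x) . omega) / M(x, omega)^(n+2s) is odd, so its integral over any symmetric set (here the circle) vanishes.

```python
import math

def M_even(w):
    """A toy kernel, even in w: M(-w) = M(w).  Hypothetical choice."""
    return (1.0 + w[0] ** 2 + 0.5 * w[1] ** 2) ** 0.5

def odd_part_integral(grad=(1.0, 2.0), exponent=4, n_theta=200000):
    """Integral over S^1 of (grad . w) / M_even(w)**exponent.
    The numerator is odd and the denominator is even, so this vanishes."""
    h = 2.0 * math.pi / n_theta
    total = 0.0
    for k in range(n_theta):
        th = (k + 0.5) * h
        w = (math.cos(th), math.sin(th))
        total += (grad[0] * w[0] + grad[1] * w[1]) / M_even(w) ** exponent
    return total * h

print(abs(odd_part_integral()) < 1e-6)   # True: the gradient term integrates to zero
```

The midpoint nodes come in antipodal pairs, so the cancellation is exact up to floating-point error; this is exactly the mechanism that lets one subtract the gradient from the equation.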
And then we put... so we wrote this huge equation and we said, OK, but that's too hard, let's consider some special cases. And the typical cases were the ones in which the kernel was the product of a spatial kernel and a time kernel. This is saying that the probabilistic motion that drives the system is actually independent in space and time, that the two variables are uncorrelated. Well, this is a simplifying assumption; and then another particular case that we treated was exactly the case of divergence and non-divergence form in the space variable, which in particular comprises the fractional Laplacian. Another case that we considered was the case of the regional fractional Laplacian, obtained by just putting the right characteristic function in the expression of the kernel; and we also comprised the case of the Caputo derivative, again by putting the right time kernel in the expression. Of course, there are many more, but this is just to give at least four concrete examples in which the master equation reduces to something that we know. Well, and that's it. So, in this sense, what I mentioned yesterday is that we could try to study a bit this topic of non-local minimal surfaces, which Chavi has already very nicely presented, and he gave a lot of very beautiful results. I will try to do very few things, but to do somehow all the details. So we won't study much, but at least what we study, we try to know it. And so the first problem that I would like to address, as I was saying yesterday, is the Bernstein problem, which tries to classify whether or not the minimal surfaces that are graphs, so I will call them minimal graphs, are affine. So for this, let me first recall the notation. The notation that Chavi was mentioning yesterday is that there is a point-wise interaction between sets.
So if you have sets X and Y in R^N, say that they are disjoint and measurable (I will always forget about the measurability), then one can define... well, let me call this big N rather than small n, you will see in a minute why. Now I don't remember what is the exponent. You put s or 2s here? Alpha, OK. Excellent, excellent. That's because there are two tendencies in the non-local minimal surfaces literature: one is to call this exponent 2s, and the other is to call it s, which is kind of annoying; but Chavi was very good in putting alpha, because in this way alpha goes from zero to one, and it doesn't create confusion with the s from before. OK, and then, if this is the interaction between sets, you have a domain Omega, you have a set E, and you say that the alpha perimeter of E in Omega is the sum of all the interactions between E and the complement of E which involve the domain Omega. So it's the interaction I_alpha of E in Omega with the complement of E in Omega; then there is the interaction of E in Omega with the complement of E outside Omega; and then there is the interaction of E outside Omega with the complement of E in Omega. And, of course, as you see, it's almost impossible not to make a mistake. If you want to check that what is written is correct: this should be the same if I exchange E with its complement. So whenever you want to check that you didn't miss any complement, or you didn't put any complement where it was not supposed to go, you can check that the alpha perimeter of E is the same as the alpha perimeter of the complement, as it should be, because it takes into account the interaction of a set with its complement. So it's symmetric under the complement operation. OK, so one looks at alpha minimizers, that is, local minimizers of this functional among all the sets that are fixed outside Omega, and then one comes to study the properties of the minimizers of this object.
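The bookkeeping of the three interaction terms, and the symmetry under complements, can be checked on a discrete toy model. This is my own illustration, with grid points in place of sets and hypothetical choices of E, Omega, and alpha: the weight plays the role of |x-y|^(-(n+alpha)), and swapping E with its complement fixes the first term and exchanges the other two.

```python
import itertools

def weight(p, q, n=2, alpha=0.5):
    """Toy interaction weight, playing the role of |x-y|**(-(n+alpha))."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return d2 ** (-(n + alpha) / 2.0)

def interaction(A, B):
    """I(A, B): sum of the pairwise weights (A and B disjoint)."""
    return sum(weight(p, q) for p in A for q in B)

grid = set(itertools.product(range(8), range(8)))
omega = {p for p in grid if 2 <= p[0] <= 5 and 2 <= p[1] <= 5}
E = {p for p in grid if p[0] + p[1] <= 7}       # a discrete "half space"
Ec = grid - E

def alpha_perimeter(E, Ec, omega):
    """I(E in Omega, Ec in Omega) + I(E in Omega, Ec outside Omega)
       + I(E outside Omega, Ec in Omega)."""
    return (interaction(E & omega, Ec & omega)
            + interaction(E & omega, Ec - omega)
            + interaction(E - omega, Ec & omega))

# Swapping E with its complement permutes the three terms, so the
# perimeter is symmetric under the complement operation:
print(abs(alpha_perimeter(E, Ec, omega) - alpha_perimeter(Ec, E, omega)) < 1e-9)
```

The check is exact up to floating-point ordering: the first term is symmetric in its two arguments, and the second term for E is the third term for its complement.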
These objects were introduced by Caffarelli, Roquejoffre, and Savin in 2010, I think, and they are very useful for many things. For instance, they are somehow the limit interfaces of long-range phase transitions, and they naturally arise when one looks at cellular automata; there is also a paper by Caffarelli and Souganidis about this, with this motivation. They naturally arise in geometric flows; there is a paper, for instance, by Imbert, and so on. Right now, what I would recall is that, as Chavi was mentioning, minimizers of the alpha perimeter have the property that their alpha mean curvature vanishes at boundary points. So suppose that E is an alpha minimizer, and let me suppose that the boundary of E is smooth enough; I think what is needed is C^{1,beta} with beta bigger than alpha, probably, or smooth enough. If I am wrong on the exponent, just say smooth enough. Then take x on the boundary of E; then the alpha mean curvature of E at the point x, which is defined to be the integral over R^N of the characteristic function of the complement of E minus the characteristic function of E, evaluated at y, divided by the kernel, this has to be zero. Ok, you may wonder why I put the characteristic function of the complement minus the characteristic function of E, because everybody would put the characteristic function of E minus the characteristic function of the complement; it sounds nicer, and you could do that, because it would be zero anyway. But the fact is that if you write it like this, then the ball has positive non-local mean curvature. So it's nicer to have the intuition that something convex has positive curvature, whatever it means. Ok, so the idea is to try to say something about objects which minimize the perimeter, the alpha perimeter, or about objects which have, for instance, zero non-local mean curvature, or constant non-local mean curvature more generally, as Chavi mentioned.
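With the sign convention "chi of the complement minus chi of E", a one-dimensional "ball" indeed has positive nonlocal mean curvature at its boundary point, while a half-line has zero curvature by symmetry. Here is a small numerical check of my own; the principal value is handled by pairing the nodes x + t and x - t:

```python
def chi_diff(y, a=-1.0, b=1.0):
    """+1 outside E = (a, b), -1 inside: the sign convention from the lecture."""
    return -1.0 if a < y < b else 1.0

def mean_curvature(x, alpha=0.5, R=1000.0, n_steps=400000):
    """P.V. integral of chi_diff(y) / |x-y|**(1+alpha) over the line.
    Nodes are paired as x + t and x - t, so the principal value cancels
    automatically near t = 0; the tail |y - x| > R lies entirely
    outside E and is added in closed form."""
    h = R / n_steps
    total = 0.0
    for k in range(n_steps):
        t = (k + 0.5) * h
        total += (chi_diff(x + t) + chi_diff(x - t)) * t ** (-1.0 - alpha)
    return total * h + 2.0 * R ** (-alpha) / alpha

# The 1-d "ball" E = (-1, 1) has positive curvature at its boundary point 1;
# the exact value is 2 * 2**(-alpha) / alpha, about 2.83 for alpha = 1/2.
print(mean_curvature(1.0))
```

For a half-line, the paired contributions at x + t and x - t cancel for every t, which is the symmetry argument used below for the tangent half-space.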
Let me say that I put in brackets the fact that the boundary of E is smooth enough, just because I'm probably going to use this information today only for smooth sets; but the non-local mean curvature equation actually holds true in the viscosity sense even if the boundary of E is not smooth. So this is just, if you like, a technical complication, a very non-trivial technical complication, but today I will probably just deal with smooth sets. And so, the first result I would like to present to you. Well, one way, for instance, to find the equation for minimizers is to integrate by parts these things; integrating twice, you write these in divergence form. You integrate by parts twice, and you have somehow the integral on the boundary of E. And so whenever you make a perturbation, one of the two integrals goes away, because of the perturbation setting, and the other is exactly like this after one integration by parts. So another way of looking at these things is that you are taking averages of the normal. So one way, maybe Chavi remembers it by heart, is that one can look at the alpha perimeter of E in R^N, for instance, as a double integral of the product of the normals, and now I have to be careful with the kernel: it's, what, N plus alpha plus two? So plus two or minus two, maybe minus two, right? Plus two? Well, one of the two, but let's think about it: this guy has to come from the divergence of something. So this guy is minus N minus alpha, so it should go to two minus N minus alpha, so it should come out this way. Okay? Let's say that this I'm doing by heart, so I may be wrong; just double-check it in the textbooks. But the idea is this: somehow you are averaging the normal over the set, dividing by a kernel. If the set is smooth enough, you can take a perturbation of this thing and you arrive at the integral inside.
You integrate back by parts the integral inside, and you get something like this. At least roughly speaking, this is the idea. Okay, so the first theorem I would like to present is something I've been doing with Alberto Farina. Okay, and so one thing is that I will look at graphs. So if I say that a set is a graph, I always mean a graph in the last direction. For this, it's convenient to write big N as n plus one. And if I say that E is a graph, well, this means that E is a subgraph in the last direction. So E is of the form x_{n+1} less than u of (x_1, ..., x_n). And I will sometimes call this variable x, and keep x_{n+1} for the last coordinate, so a point in R^{n+1} is of the form (x, x_{n+1}) with x in R^n. Okay, so if I do this, shall I erase everything? Just because I was here, because Francesco introduced me from here and so it was easy to do this. It's not a political statement. So he touched the blackboard at 90 degrees exactly when he hit the blackboard and then he moved back. Okay, so, the minimization problem. Okay, so the result somehow says that minimal cones... oh sorry, I say that E is a cone if it is homogeneous of degree one, in the sense that tE is equal to E for any t bigger than zero: for any p in E and for any t bigger than zero, tp belongs to E. And so the theorem I would like to present is a classification theorem of all the alpha minimal cones, so the alpha minimizers which are cones, that are also graphs. So suppose that E is an alpha minimal cone and a graph, which means that E is of the form x_{n+1} less than u of x. And suppose that u is sufficiently smooth. Now, u cannot in principle be very smooth at the origin, because the fact that E is a cone makes u homogeneous of degree one: you put the t here, the t here, and the t has to come out so that it simplifies. So we know that u is homogeneous of degree one, and unless u is flat, it cannot be very smooth at the origin.
So I will assume that u is sufficiently smooth outside the origin, C^{1,beta} with beta bigger than s, I think. Then E is a half space; or, if you prefer, u is an affine function. How to prove these results? Well, once one knows the proof, the proof is not difficult. The idea comes from an old paper by De Giorgi, published by the Scuola Normale Superiore di Pisa, which is one of the first papers on the Bernstein problem. And so the rough idea is that one can look at the equation for E, which is this one, rewrite this equation in terms of u, take one derivative, and use the maximum principle to show that the derivative of u is actually a constant. So let me do the first step, which is to write an equation for u. Well, this part, of course, was not in the original De Giorgi approach, because De Giorgi already had the equation. But here the idea is that one has this guy and one can integrate this guy in the vertical coordinate. So somehow my approach is this. I fix x on the boundary of E, with x different from zero, so that I mean a smooth point. And I take the tangent plane at this point, so I take the subgraph y_{n+1} less than gradient of u at x, dot, (y minus x), plus u of x, so I have something like this: I have my graph E, and E is somehow whatever is below this curve, and L is whatever is below the straight line. And so I can actually write that H_alpha of L at the point (x, u(x)) of the boundary is zero, just because of symmetry, and then subtract it from H_alpha of E. This is just to be, I would say, on the hygienic side, because I'm somehow removing any possible singularity that comes just from the fact that I'm not looking correctly at the cancellations near the point x. So if I do this, from this fact and the curvature prescription for E, and if I integrate, by Fubini, in the vertical variable, if I'm not mistaken, I have something like this.
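The symmetry fact being used here, that the half-space L below the tangent plane has zero nonlocal mean curvature at the touching point, can be written as follows (my transcription of the blackboard argument):

```latex
\[
  H_\alpha[L](p) \;=\; \mathrm{P.V.}\!\int_{\mathbb{R}^{n+1}}
  \frac{\chi_{L^c}(y)-\chi_{L}(y)}{|p-y|^{\,n+1+\alpha}}\,dy \;=\; 0 ,
  \qquad p\in\partial L ,
\]
% because the reflection across the hyperplane \partial L fixes p,
% preserves |p - y|, and exchanges L with L^c, so the integrand is odd
% under it and the principal value vanishes.
```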
I have an integral over the region where u of y is less than y_{n+1}, which is less than gradient of u at x dot (y minus x) plus u of x, minus another integral over the region with the reverse order, of my kernel. The kernel I write just once, but it's the same kernel, and I'm writing N as n plus 1. OK, now, this is a general observation. You are integrating this kernel with the last coordinate between two functions, so I think it's useful to write once and for all what happens. So the real issue is to integrate y_{n+1} between two functions; I think I call them f and g here. And these are nice functions, in the sense that when I compute them at y equal to x, I have the same value here and here, and this value is exactly u of x, which is my last coordinate; so somehow this means that I'm taking the right point. So if f and g have this property, you can write this guy more or less explicitly in this form. So it's an integral over R^n, in a variable that I'm calling theta, to the minus n minus alpha, and then I have F of the incremental quotient of g minus F of the incremental quotient of f. And this F is the auxiliary function that was introduced by Chavi yesterday. I'm not sure we have the same normalization; it doesn't really matter, but what I wrote in my notes was just that I'm integrating F prime from zero. I mean, the only important thing, as you will see, is that the derivative is 1 over 1 plus t squared to some power, which is a nice function decaying at infinity; and actually the second derivative is also a nice function decaying at infinity, and so on. So somehow the expression for F is very ugly, but the derivatives of F are much nicer than the function itself. Ok, so what happens if you apply this to this f and this g and vice versa? You can apply the thing to f and g and vice versa, and you can actually change theta with minus theta when you do this. The equation that you get is the following. So the equation that you get, I'm calling it calligraphic F of u at a point x, is just the integral over R^n.
Now it's very important not to mess up the signs: minus n minus alpha. Then I have F of the incremental quotient with plus, plus F of the incremental quotient with minus, so I hope I did not mess up the signs. And you may wonder why the gradient disappeared. Well, the gradient disappeared because I write this equation and I get the plus theta part; then I write the same equation changing theta with minus theta; the minus sign here makes the gradient disappear, and the parts with the incremental quotients sum up. That's what should happen. So if you do this, the next step is of course to differentiate. So the first step was to write an equation for u; we wrote an equation for u, it's ugly, well, it is what it is. Then we differentiate the equation in some variable: we fix a direction, look at the derivative v of u in that direction, and take a derivative of the equation to see what is the equation satisfied by v. So the equation satisfied by v is the following; it's what I call the linearized equation at u, applied to v, at a point x. Sorry, and this guy was zero: this is the equation, and the statement is that the linearized equation is also zero. So let me write what this guy is. It's the integral over R^n (now it looks uglier than before, but you will see it's not that bad) of theta to the minus n minus alpha minus 1, and then... let me do a very drastic thing, let me erase everything, because I will need a lot of space just to write the equation; so I will use the blackboard up to there, but no panic, things are going fine, hopefully. So this is F prime of the first incremental quotient, the one that I wrote with plus, and then I have to differentiate inside, so I get v of x plus theta minus v of x. Then there was a theta, which I put as a common factor, because I know that I will have the same here; that's why I put it as a common factor. Then I have F prime of u of x minus theta minus u of x, divided by theta, and then I have v of x minus theta minus v of x. And I was too optimistic not to erase also this part. Ok.
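For the record, here is my reconstruction of the two equations on the blackboard, with normalizations left aside, and assuming F is odd so that the gradient terms cancel under the change of theta to minus theta:

```latex
% Equation for the minimal graph:
\[
  \mathcal{F}u(x) \;=\; \int_{\mathbb{R}^n}
  \Big[\, F\Big(\tfrac{u(x+\theta)-u(x)}{|\theta|}\Big)
        + F\Big(\tfrac{u(x-\theta)-u(x)}{|\theta|}\Big) \Big]
  \frac{d\theta}{|\theta|^{\,n+\alpha}} \;=\; 0 .
\]
% Linearized equation at u, acting on v = \partial_e u
% (the extra |\theta|^{-1} is the common factor from differentiating
% the incremental quotients):
\[
  \mathcal{L}_u v(x) \;=\; \int_{\mathbb{R}^n}
  \Big[\, F'\Big(\tfrac{u(x+\theta)-u(x)}{|\theta|}\Big)
          \big(v(x+\theta)-v(x)\big)
        + F'\Big(\tfrac{u(x-\theta)-u(x)}{|\theta|}\Big)
          \big(v(x-\theta)-v(x)\big) \Big]
  \frac{d\theta}{|\theta|^{\,n+\alpha+1}} \;=\; 0 .
\]
```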
And all of this is zero. Ok, I know you may not like this expression, but let me try to convince you that it's not so bad. As I was saying, u is homogeneous of degree one, which means that v is homogeneous of degree zero. So now the third step is: use the maximum principle. Well, but consider, this is not immediate... so yesterday night I checked. Yes, you may have to write this as a principal value, this I agree; but nevertheless, when you arrive here, if u is C^{2,beta} with beta bigger than alpha, then all this junk here is integrable. And if you only write one F, it's a little bit worse, because there is somehow a cancellation that comes here from the fact that you have a minus when you look at this. Well, if F were the identity, this would somehow be the fractional Laplacian written with the second-order incremental quotient. I agree; at this level I don't think that you even need the principal value, but at this level you may want to put the principal value. Let's say that all the integrals that I have... but also, if you want, here you have to put the principal value to start with. Why? So I thought it was safer to symmetrize it like this; but if you say that one term could have been enough, well, I have to admit that I have this idiosyncrasy of writing the fractional Laplacian with the plus and then the minus, so also of writing the second-order thing this way. So it may well be that you don't need to symmetrize here; it just reminds me more of the symmetric structure of the fractional Laplacian, and it looks more like a second-order operator when you write it with the plus and the minus. But I agree; I mean, you will see the proof, and you will probably tell me: look, you don't need it, because the proof works for one side and for the other in the same way. So, we are already over time, yes. Okay, so I apologize, I will finish this tomorrow. Tomorrow, okay, sorry. So, do you have questions?