OK, so good morning to everybody. Welcome to the second week of the school conference. The first announcement is that there is a big change of schedule between today and tomorrow because of some flight delays. So this afternoon, the first talk will be by Juan Luis Vázquez at 2. Then there will be another talk by Enrico at 3.30. And the talk by Cabré, which was supposed to be today, is going to be tomorrow at 11. OK, so this is the change of schedule. So we can start this morning with Enrico, whom, as you know, it is a pleasure to have here, and who will talk about nonlocal equations under different perspectives. So I don't have a precise draft for my lectures, so somehow the lectures are open to your questions and comments. And I'm not sure what the level of all of you is. So if I'm going too fast or too slow, just please stop me and we will discuss what's going on and adjust the level to your preferences and interests. As a general comment, since we are in an institute of physics, I would say that, roughly speaking, there are two kinds of physicists: the ones who think that differential equations are a good model for the world, and the ones who think that they are not enough to describe the complexity of the universe. In a sense, a differential equation is based on the idea that we have a quantity that has some interest for us, say u, which is a function of space x and time t, and we want to describe its evolution. And the differential equation answers that the evolution of u is due to the fact that there is some imbalance, some difference of u at some point here and there, and this infinitesimal discrepancy produces a flow that changes the quantity u in time. Now, if you want to think that the world is more complex than this, then there are actually many influences on the quantity u that come from far apart.
Another possibility is to look at what the other half of the physicists call the master equation, which is an object involving an operator that takes into account the function u with a suitable average with respect to space and time. So the natural object to look at in this framework is some integral, say in R^n times R, thinking that x is in R^n and t is in R, of u(x, t) minus u(x − y, t − τ), against a measure which takes into account space and time. Well, of course, if you write things like this, this is a very general object, and in a sense it's even too general to talk about. So one tries to make some ansatzes, in particular cases, that make this object more amenable — as general as this, it's not even a master operator, it's a monster operator. So one possibility is first to say that the space and time variables are somehow independent, or uncorrelated, so that the measure μ is just the product of a measure in space and a measure in time, and the kernel has a product structure. And I forgot to say that the kernel may also take into account x and t. So there is a space kernel that takes into account x and y, and a time kernel which takes into account t and τ. I'm using the subscript S for the space kernel and T for the time kernel, though the difference is somehow psychological, because here the roles of x and t are not really precisely distinguished. OK. Still, this is a very general kind of object, so there are some particular cases that I think are of interest. The first important example is when one looks only at the space variables. So I forget the time kernel for the moment, and I take the spatial kernel given, up to constants, by 1 over |y|^(n+2s), where s is a parameter.
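To fix notation, a possible transcription of the blackboard formulas is the following (the normalizations are mine and only indicative):

```latex
% general master (or "monster") operator
\mathcal{L}u(x,t)=\int_{\mathbb{R}^n\times\mathbb{R}}
  \bigl(u(x,t)-u(x-y,\,t-\tau)\bigr)\,d\mu(y,\tau),
% uncorrelated ansatz: the measure splits as a product of a space
% kernel and a time kernel
d\mu(y,\tau)=K_S(x,y)\,K_T(t,\tau)\,dy\,d\tau .
```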
From now on, for me, s will be a parameter between 0 and 1. Now, if I put this kernel into that object and I only consider space variables, this becomes the integral of u(x) minus u(x − y), over |y|^(n+2s), in dy. I can also write it as the integral of u(x) minus u(z), divided by |x − z|^(n+2s), in dz. And this is often called the fractional Laplacian of u. In a sense, this is a very natural operator. Let me remark that this kernel is singular, so to rigorously say what this object is, since there is a singularity at 0, the integral has to be taken in the principal value sense, meaning that by definition it is the limit, as ε goes to 0, of the integral outside the ball of radius ε. So we may think that this is a weird object to look at, but if we have time in this lecture, we will consider the analogies and differences with an operator that you already know, the classical Laplacian. A second example of a master operator — this is somehow what I will call the fractional Laplacian — is a slightly different case, in which I consider a spatial kernel very similar to that one, but in which I also have a domain Ω. In this case the operator becomes this; or, if you want to write it in the z variable, u(x) minus u(z) divided by |x − z|^(n+2s), where now x − y, that is z, has to stay in Ω, so the integral is just restricted to Ω in z. With the same convention, the integral is taken in the principal value sense. This operator we will call the regional, or censored, fractional Laplacian. And we will see that it has many features in common with the fractional Laplacian too: for instance, when Ω is equal to R^n, it reduces to the fractional Laplacian.
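Written out, the two operators just introduced are (up to dimensional constants, which I omit as in the lecture):

```latex
% fractional Laplacian, in the principal value sense
(-\Delta)^s u(x)
  =\mathrm{P.V.}\int_{\mathbb{R}^n}\frac{u(x)-u(x-y)}{|y|^{n+2s}}\,dy
  =\lim_{\varepsilon\to0}\int_{\mathbb{R}^n\setminus B_\varepsilon}
     \frac{u(x)-u(x-y)}{|y|^{n+2s}}\,dy,
% regional (censored) fractional Laplacian on a domain Omega
(-\Delta)^s_{\Omega}u(x)
  =\mathrm{P.V.}\int_{\Omega}\frac{u(x)-u(z)}{|x-z|^{n+2s}}\,dz .
```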
But it is a structurally very different object, and we will point out several differences between the two later on. OK, so what else? One can ask: are there interesting cases, for instance, of time kernels? Let me mention one: the case in which the time kernel has the form of a characteristic function of the positive τ, divided by τ^(1+s). This kernel leads to what is called the Caputo derivative. And as you can imagine, it is a sort of derivative, in the sense that dimensionally it is a fractional derivative of order s. But from the physical point of view, it is something that also takes into account the past — it weights the past, say from minus infinity to t, with a memory that decays in time. So the events that are very far in the past are weighted less, and count less, while the ones that are more recent are more important in this operator. Of course, in the classical theory of differential equations, especially elliptic differential equations, there is an important structural difference. Probably the most important difference in partial differential equations is whether or not the equation has a variational structure, so that it comes from an energy functional. This difference is usually referred to as the classification of operators into divergence or non-divergence form. The divergence form is the case in which the operator is variational — it comes from an energy — and typically the equation can be written as the sum over i and j of ∂_i(a_ij ∂_j u). Or there is the case in which the operator does not come from an energy, and so is in non-divergence form; in this case, typically, the operator can be written in the form a_ij ∂_ij u. Of course, there are operators that are both in divergence and non-divergence form: if A is, for instance, the identity matrix, then you get the Laplacian. But typically these two structures are very different.
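In formulas, my reading of the time kernel and of the two classical structures is the following (constants omitted; the Caputo-type identity is stated under the assumption that u is smooth with suitable decay in the past):

```latex
% time kernel and the resulting memory operator of order s
K_T(\tau)=\frac{\chi_{(0,+\infty)}(\tau)}{\tau^{1+s}},
\qquad
\int_{0}^{+\infty}\frac{u(t)-u(t-\tau)}{\tau^{1+s}}\,d\tau
  \;\simeq\;\int_{-\infty}^{t}\frac{\dot u(\sigma)}{(t-\sigma)^{s}}\,d\sigma,
% divergence (variational) versus non-divergence form
\sum_{i,j}\partial_i\bigl(a_{ij}(x)\,\partial_j u\bigr)
\qquad\text{vs.}\qquad
\sum_{i,j}a_{ij}(x)\,\partial_{ij}u .
```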
And you can cook up cases in which A is a positive and bounded matrix, and the theories of the two are structurally very different. And somehow I would like to propose you a little exercise, which is to recover these different structures directly as a limit of the master equation. So I write down the exercise. It's not completely trivial — at least it was not completely trivial for me — so if you have trouble doing it, please pass by and we can discuss it. The little exercise is the following: how to recover the divergence and non-divergence structures as limits, when s goes to 1, of the master operator? Well, the idea is that this difference of structure of the a_ij is a difference of structure of the spatial kernel in the master operator with respect to the action of affine transformations of the space variable itself. Said like this it sounds a bit complicated, but let me write what I mean. In the first case, one takes the spatial kernel in x and y to be (1 − s), times some function M evaluated at x and x − y, divided by |y|^(n+2s). For me, M is nice and strictly positive; in some cases you can relax these assumptions, but let's keep it simple. Well, if you do this, in the limit as s goes to 1, the master operator converges to the divergence-form operator, for some a_ij which are just averages on the sphere of a kernel dictated by M. OK, so this is somehow the first exercise, and the second exercise is the counterpart. Ah, sorry, this exercise is not complete, because I didn't put the structural assumption: the structural assumption in this case is that M at (x, x − y) has to be equal to M at (x − y, x). The second case is when the kernel is instead (1 − s), times M at (x, y), divided by |y|^(n+2s), with the structural assumption that M is even with respect to y, so M(x, y) is equal to M(x, −y). Now, in this case, the limit as s goes to 1 of the master operator is the operator in non-divergence form.
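As far as I could reconstruct the blackboard formulas, the two cases of the exercise read as follows; this is my own transcription, so the precise arguments of M and the normalizations should be double-checked (M smooth and strictly positive, limits up to constants):

```latex
% case 1: kernel symmetric in x and z = x - y  ->  divergence form
K_s(x,y)=(1-s)\,\frac{M(x,\,x-y)}{|y|^{n+2s}},
\qquad M(x,z)=M(z,x)
\;\Longrightarrow\;
\lim_{s\to1}\mathcal{L}_s u
  =\sum_{i,j}\partial_i\bigl(a_{ij}(x)\,\partial_j u\bigr),
% case 2: kernel even in y  ->  non-divergence form
K_s(x,y)=(1-s)\,\frac{M(x,y)}{|y|^{n+2s}},
\qquad M(x,y)=M(x,-y)
\;\Longrightarrow\;
\lim_{s\to1}\mathcal{L}_s u
  =\sum_{i,j}a_{ij}(x)\,\partial_{ij}u,
```

with the a_ij given by averages on the sphere of a kernel dictated by M, as stated in the lecture.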
And so, well, it depends how picky you are and how generally you want to do the exercise. I'm very modest, so I did the exercise taking M as smooth as you want and the function u as smooth as you want, and just in the very smooth case proved the convergence. I think that this is sort of enough, in the sense that, once you understand what's going on in the smooth case, if you are really strong you can try to state the same convergence in the distributional sense, or in the viscosity sense, according to the structure and your tastes. But for me it was not even obvious that there was a natural way to reconstruct these local objects from the master operator, and somehow to relate the property of putting the derivative inside the equation, or not, to a property coming from an affine transformation of the spatial variable of the kernel. So philosophically, it's not so clear that they are the same, right? Again, the assumptions look artificial but are actually quite natural, because if you rename the variable x − y as a new variable z, the first condition is telling you that you can exchange x and z. So somehow the kernel is charging the point x as much as the point z = x − y, and this is why it comes from an energy functional. On the other hand, the second condition is natural if you think about it, because you want to be able to cancel linear terms in the numerator of the master equation: it allows you to change y into −y without changing the structure of the kernel. So these are kind of natural assumptions if you look at the master equation in the large, and they are the right ones to recover the local operators. So if you want to think about it, we can discuss the proof if you have any trouble. OK.
Having said this, my proposal is to put our hands a little bit into, first of all, this operator called the fractional Laplacian, and to understand why it is similar to, but also why it is different from, the classical Laplacian. So I have listed eight fundamental differences with respect to the classical Laplacian. The classical Laplace operator is the sum of the second derivatives. The fractional Laplacian is the principal value integral above, or sometimes it is written in a second-difference form; of course, the two are the same up to a factor 2 that I will forget. In principle, I will always write formulas neglecting possible constants. If you take the first expression, change y into −y, and sum the two, then you get the second expression, up to a factor 2. If you look at this object, there is, of course, a natural similarity with the Laplacian, because the Laplacian can also be seen, in a sense, as a limit of averages: one can say that the Laplacian of u is just the limit, as r goes to 0 — again, up to constants that I forget — of 1 over r^(n+2) times the integral over B_r(x) of the difference between u and its value at x. When you write things like this, you see the first immediate similarity between the Laplacian and the fractional Laplacian, in the sense that both try to compare the value of the function u with its average value in a neighborhood. And somehow, the fact that the fractional Laplacian or the Laplacian vanishes, or equals a nice function, forces the value of the function to revert to, or stay close to, the values in the neighborhood. This suggests that this operator should possess some kind of nice regularity theory, because the function cannot oscillate too much: if the function at the point x wants to go too far above the values in the surrounding area, then the operator pushes it down, and vice versa if it goes too far below. Nevertheless, there are at least eight very important structural differences between the two operators.
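The two formulas compared on the blackboard can be written as (constants neglected):

```latex
% the Laplacian as a limit of averages over small balls
\Delta u(x)=\lim_{r\to0}\frac{c_n}{r^{\,n+2}}
  \int_{B_r(x)}\bigl(u(y)-u(x)\bigr)\,dy,
% the two equivalent forms of the fractional Laplacian
% (equal up to the factor 2 mentioned in the lecture)
(-\Delta)^s u(x)
  =\mathrm{P.V.}\int_{\mathbb{R}^n}\frac{u(x)-u(x-y)}{|y|^{n+2s}}\,dy
  =\frac12\int_{\mathbb{R}^n}\frac{2u(x)-u(x+y)-u(x-y)}{|y|^{n+2s}}\,dy .
```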
So let me try to go over these differences with you. The first difference is obvious: locality versus non-locality. Larger? OK, sorry — so everybody missed what I wrote till now. Sorry. OK, this is maybe an obvious difference, in the sense that the Laplace operator is local: if I want to compute the Laplace operator at a point, it only depends on the values of the function nearby, while to calculate the fractional Laplacian I really need to know the values of the function everywhere. So for instance, suppose that you are in R^2 and you have a function which is compactly supported over here. Well, when you compute the Laplacian at the origin it is 0, but when you compute the fractional Laplacian, you see the values of u near here, and so the fractional Laplacian of a function like this at the origin is not 0. OK, a second structural difference, which is related to that, is that points far away feel the influence of the other points: influence at infinity. Well, if you have u which is smooth and compactly supported, or more generally in the Schwartz class of rapidly decaying functions, then of course at infinity the Laplacian of u is 0 — exactly 0 if u is compactly supported, or extremely small, decaying faster than any polynomial, if u is in the Schwartz class — while the fractional Laplacian of u at infinity behaves like 1 over |x|^(n+2s), say for large x. So the fractional Laplacian has a long tail, and there is nothing you can do: you have to keep this long tail at infinity, like a memory of what happens near the origin. You can easily compute this for a fixed, nice, compactly supported function. You can try to do the computation also in the Schwartz class, which is just slightly more complicated, and you can check that this decay is actually optimal: just put a bump close to the origin and see what happens far away.
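This tail computation is easy to check numerically. The sketch below is my own illustration, not from the lecture: it takes a smooth bump supported in (−1, 1), in dimension n = 1 with s = 1/2. Far outside the support, u(x) = 0 and the kernel is smooth over the support of u, so no principal value is needed, and doubling x should divide the (unnormalized) fractional Laplacian by roughly 2^(n+2s) = 4.

```python
import numpy as np

s = 0.5   # fractional order
n = 1     # space dimension

def u(y):
    """Smooth bump compactly supported in (-1, 1)."""
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

def frac_lap_far(x, m=200000):
    """(-Delta)^s u at x, valid for |x| > 1, i.e. outside supp(u).

    There u(x) = 0 and the integrand is smooth, so the principal
    value reduces to an ordinary integral over (-1, 1).
    The normalizing constant is omitted, as in the lecture.
    """
    # midpoint rule on (-1, 1)
    y = np.linspace(-1.0, 1.0, m + 1)
    mid = 0.5 * (y[:-1] + y[1:])
    dy = y[1] - y[0]
    return -np.sum(u(mid) / np.abs(x - mid) ** (n + 2 * s)) * dy

ratio = frac_lap_far(10.0) / frac_lap_far(20.0)
print(ratio)  # approximately 2**(n + 2*s) = 4, confirming the tail decay
```

The sign is also instructive: where u vanishes, the far-field value is negative, driven entirely by the bump near the origin.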
Again, if you have trouble with these computations, just pass by and we'll discuss them, or we'll do them in the lectures. OK, third structural difference. This one is very deep, I think. Harmonic functions are functions whose Laplacian is equal to 0; s-harmonic functions are functions whose s-Laplacian is equal to 0. And in principle — OK, it's difficult to imagine how a harmonic function looks in higher dimensions — but one may expect that, more or less, harmonic and s-harmonic functions look the same. This is actually absolutely not true, and there is a kind of amazing fact: while the local geometry of harmonic functions is very rigid, basically all prescribed, the local geometry of s-harmonic functions is completely free. So for instance, in 1D, if I draw a function like this — say, the parabola — this function is not harmonic. Of course, you can take two derivatives and see it's not 0; or you can say it has an interior minimum, and so it cannot be harmonic, by the maximum principle. Say it's not harmonic in (−1, 1), for instance. Nevertheless, what happens in the non-local case is that, although this function is not s-harmonic in (−1, 1), it is completely indistinguishable from an s-harmonic function, in the sense that for any ε arbitrarily small, there exists a function u_ε which is s-harmonic in (−1, 1) and such that u minus u_ε, in any given C^k norm, is less than ε. So your original function — for instance x², but any given function would be the same — maybe is not s-harmonic; nevertheless, within ε you can find a nearby function u_ε which is s-harmonic. And the reason for this unexpected s-harmonicity really comes from the non-local contributions outside (−1, 1): somehow your u_ε is completed in some crazy way. You can also make it compactly supported, if you like.
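Stated precisely, the theorem just described — the result with Dipierro and Savin, usually quoted as "all functions are locally s-harmonic up to a small error" — reads, as I understand it from the lecture:

```latex
% fix k and a smooth u; then for every eps > 0 there is a smooth u_eps with
\exists\,u_\varepsilon\in C^\infty(\mathbb{R})
\ \text{such that}\quad
(-\Delta)^s u_\varepsilon=0\ \text{in }(-1,1)
\quad\text{and}\quad
\|u-u_\varepsilon\|_{C^k([-1,1])}\le\varepsilon .
```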
It is completed in some crazy way, so that the oscillation outside compensates the contributions of the points in (−1, 1), making the total integral defining the fractional Laplacian equal to 0. In the next lecture, if possible, we will try to give the proof of this fact; the proof will actually be rather simple, in a sense. And let me say that this is a result obtained with Serena Dipierro and Ovidiu Savin. I think it's a nice result. As you will see, the proof is not really constructive, so the picture I make is not a real picture, in a sense: I don't really know how the function has to be to compensate the oscillations. Nevertheless, if we speak about pictures on the blackboard, any picture on the blackboard is correct, because I can apply the same theorem at a larger scale. I drew this picture just arbitrarily, but it is somehow correct, in the sense that if the orange function I drew is not s-harmonic, then I can apply the same result in a larger ball and say: OK, this function maybe is not s-harmonic, but up to ε I can modify it and extend it to an s-harmonic function. So the picture I drew on the blackboard is correct, up to redrawing a new picture at a higher scale, and so on. Basically, any picture you draw in a finite domain is the picture of an s-harmonic function, up to a very negligible error, if you complete the picture outside in the appropriate way. Well, this is a nice, or horrible, fact. Usually, when I speak about this result, there are people who come to me and say, oh, this is a beautiful result, and people who come to me and say, oh, this is a very disturbing result. So it depends on your feeling about the world: you can be happy to have such a rich environment to work in, or you can be somehow disgusted, because, of course, proving classification, regularity, and rigidity results with this object to fight with may be somehow unpleasant. And this leads to the fourth structural difference.
The fourth structural difference concerns the Harnack inequality. The Harnack inequality says that if the Laplacian of u is equal to 0 in, say, B_2, and u is positive, then the inf and the sup of u are comparable: of course the sup is bigger than the inf, but up to a constant it is also smaller. For classical harmonic functions this is a simple thing, in the sense that, for instance, you can say that u at a point equals the average of u on a ball, and then, since u is positive, deduce that the oscillation has to be controlled in this way. For fractional objects, things are more complicated, and you have a counterexample right here on the blackboard: if you start with x² and take ε small enough, your function u_ε will have an interior minimum, and so you can translate the function vertically to have the minimum exactly at level 0. So the same statement for fractional harmonic functions is false. The statement, as we will see, becomes correct if, instead of asking u to be nonnegative locally, you require u to be nonnegative globally. So again, you need a global constraint on the function to recover a local piece of information. OK, fifth difference. This is also something that was very shocking for me when I noticed it the first time. I don't know who noted it first; when I discovered it, I was shocked, but historically, I don't know. Maybe there are these papers by Blumenthal and Getoor, in which they computed explicit examples of s-harmonic functions, so in particular they had to discover this — I'm not sure they were the first, and there might be other routes to it. So I will tell you what is bothering me, and I will make explicit examples of this situation. I'm not sure that it was not possible to deduce this information just from general facts, I don't know, in harmonic analysis; but I will try to give an explicit computation for what I have in mind.
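In formulas, the classical statement and its fractional correction (with positivity required globally) are:

```latex
% classical Harnack inequality
\Delta u=0\ \text{in }B_2,\quad u\ge0\ \text{in }B_2
\;\Longrightarrow\;
\sup_{B_1}u\le C\,\inf_{B_1}u,
% fractional version: positivity must be imposed in the whole space
(-\Delta)^s u=0\ \text{in }B_2,\quad u\ge0\ \text{in }\mathbb{R}^n
\;\Longrightarrow\;
\sup_{B_1}u\le C\,\inf_{B_1}u .
```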
So the fifth difference is the growth from the boundary. OK, suppose that you want to solve Laplacian of u equal to f in B_1, with u equal to 0 along the boundary. Then what happens is that u grows linearly from the boundary: u(x), for x in B_1, is less than, up to a constant, (1 − |x|) times the sup of f. OK, which seems quite natural: you have your function u, it has to hit the boundary, and then it has to grow from the boundary because f forces it. How does it grow? Well, the bound only depends on the distance from the boundary, so it's a sort of linear growth from the boundary. Now, it's kind of annoying, but this is not true in the non-local case. For instance, the function u_s, which is (1 − |x|²), positive part, to the power s, solves the analogous fractional problem with constant right-hand side, and you see that the growth from the boundary is not linear anymore: near the boundary it grows like the distance to the power s. If we have time, we will give a proof, at least in one dimension and at least when s is equal to 1/2, because it is a very simple and nice proof; the general case is more difficult, but one dimension with s equal to 1/2 is nice and simple, and we can do it by pictures. Just to say that it's kind of striking that the growth from the boundary is not linear anymore, but is a Hölder growth of exponent s. Difference six is somehow related to this: boundary regularity. Boundary regularity says, again, that if you have a solution of the Laplace equation in a nice domain like a ball, then u is smooth up to the boundary; so if u solves this equation, then the gradient of u is bounded up to the boundary. [From the audience: is u_s s-harmonic in B_1? What's wrong with the function?] No, sorry, not s-harmonic — I wanted to say that it solves the equation with f equal to 1. If you want an s-harmonic example, we can take (x_n), positive part, to the power s: this is s-harmonic in x_n bigger than 0.
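The boundary-growth examples mentioned here can be summarized as follows (c_{n,s} denotes a positive constant whose exact value I am not reproducing):

```latex
% classical case: linear growth from the boundary
\Delta u=f\ \text{in }B_1,\quad u=0\ \text{on }\partial B_1
\;\Longrightarrow\;
|u(x)|\le C\,(1-|x|)\,\sup|f|,
% fractional model solutions: only Holder growth of exponent s
u_s(x)=(1-|x|^2)_+^{\,s}\ \text{satisfies }
(-\Delta)^s u_s=c_{n,s}\ \text{in }B_1,
\qquad
(x_n)_+^{\,s}\ \text{is $s$-harmonic in }\{x_n>0\}.
```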
So yes, I put this example because it is related to the case with f equal to 1. If instead you want an example in which f is equal to 0, well, the same unpleasant growth from the boundary holds true, and the example is just the blow-up, somehow, of one of the previous examples. OK, and these examples also show that the uniform gradient estimate is not true in the non-local case, in the sense that all these examples u_s show that the gradient of the solution blows up near the boundary, because the gradient is like the distance to the power s − 1. So the growth from the boundary is like the distance to the power s, the derivative is like the distance to the power s − 1, and so on. This is in fact a general phenomenon: there are several papers, especially by Xavier Ros-Oton and Joaquim Serra, who discussed regularity from the boundary and showed that these examples are the paradigmatic ones — somehow they model all the possible growths from the boundary and all the possible regularity of the gradient and of the higher derivatives near the boundary. So again, there is nothing we can do about that, and it's an important difference with respect to the classical case. Difference seven: the case of explosive solutions. Again, in the classical case of the Laplacian, if you have a nice solution of the Laplace equation in a ball, the values at the origin are controlled from below and from above by the values at the boundary; so you cannot have a harmonic function which explodes at the boundary of the ball. In the fractional case, you can, and there are several examples of explosive solutions constructed by Nicola Abatangelo, Patricio Felmer, and others. Just to make an example, since time is a little bit short, I will show you one of them: let me call it u(x), which is (1 − x²) to the power −1/2 if |x| is less than 1, and 0 otherwise. This function is s-harmonic, for s equal to 1/2.
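The explosive example, and its version for general s, can be written as follows; this is my reconstruction of the blackboard formula, so treat the general-s statement as quoted from memory:

```latex
% explosive s-harmonic function in the interval, case s = 1/2
u(x)=(1-x^2)^{-1/2}\ \text{for }|x|<1,\qquad u=0\ \text{outside},
\qquad
(-\Delta)^{1/2}u=0\ \text{in }(-1,1);
% for general s in (0,1), the model solution is
(1-|x|^2)_+^{\,s-1}\ \text{is $s$-harmonic in }B_1 .
```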
So again, this is a very striking example, because if you want to make a picture: basically, the function has value 1 at 0, then it grows, and it diverges near the boundary, and then it's 0 outside. So this is very striking. Again, if we have time, we will give a proof of this fact, again in a very easy way, by pictures. But just to convince you that this is not completely unreasonable, one can ask: how come the fractional Laplacian at this point can vanish? Nearby, you have a very convex thing, so the fractional Laplacian should have a negative sign. OK, fine: here the values of the function are above the value at 0. But you also have a long-tailed contribution from the points coming from infinity, for which the function is below, because the value here is 1, the value out there is 0, and so on. Somehow, by miracle, these contributions exactly compensate in the integral, and so you get that the fractional Laplacian is 0. Probably my time is over. Sorry, yes, this is for s equal to 1/2, but the example is general: you can put a different s and get a different power there. I just focused on the simple example because time was getting short, thank you. So, since my time is over and we have the coffee waiting for us, I will do the eighth difference in the next lecture, and we will keep talking about these examples, and maybe proving that all functions are s-harmonic up to a small error. I'm not really sure I can do it in one hour; in case, we'll keep going in the subsequent lectures. So if I was going too fast or too slow, please stop me and tell me. And if you have questions on the things I've discussed, I'm here and happy to chat.