Well, after I take out some leading-order terms, I can rewrite it as the exponential of minus beta over 2 times F_n — the next-order energy, depending on the configuration and on mu_V — plus the confining terms, which are there just to keep the particles inside Sigma. OK. And so the temptation is to say: all right, now I somehow have an expansion of F_n — ah, sorry, there's something I should add: there is an n log n over d term if you're in dimension 1 or 2; it just goes here. So we are tempted to plug in here n times W-bar of P-bar. Showing that this is legitimate is essentially the question of proving a large deviation principle. Since we have a lower bound, we will always have an upper bound here, because there is a minus sign, so I can always bound from above. So if I write K_{n,beta}, or if I look at the probability of something, I can always bound it from above by the exponential of minus blah, blah, blah. So that's the first bound. But now I have to deal with dx_1 ... dx_n. So what is the connection between x_1, ..., x_n and the limiting empirical fields P-bar? It's not so obvious. What you have to do here is look at the configurations x_1, ..., x_n whose limiting empirical field will look like P-bar — otherwise, why am I allowed to replace by P-bar? — and you have to compute the volume of these configurations, or at least the logarithm of the volume of these configurations. Another way of saying what I just said: you have to understand how many microstates create a given macrostate P-bar. That's how physicists would say it.

So we have a model for things like that: Sanov's theorem, which you may know. It says, if I'm not mistaken, the following. Take points that are uniformly distributed in Sigma, so the x_i are i.i.d. with the uniform law on Sigma. The probability that the empirical measure lands in a ball centered at mu and of radius epsilon — a ball for some topology on probability measures; it doesn't matter which — is essentially like the exponential of minus n times the integral of mu log mu. It's the entropy that comes out, right? The integral of mu log mu is the entropy. So it tells you that the volume of configurations whose empirical measure looks like mu is logarithmically related to the entropy of mu. So if you're computing volumes, you should see entropies. And here we have to do a similar thing, except we're dealing not with empirical measures but with empirical fields. So we have to use some sort of Sanov theorem for empirical fields. There is such a result — you can do it; see, for example, the book of Rassoul-Agha and Seppäläinen, which presents this very nicely. So there is another entropy, called the specific relative entropy, which does the job for you. I'm not going to define it completely, but I'm going to define the specific relative entropy of my empirical field with respect to the Poisson point process — P is there for Poisson. It's going to be an average over Sigma of the specific relative entropies of the P-bar^x: just an average of relative entropies with respect to Poisson. And this entropy is defined by taking large-box limits again: you take a cube of size R, you compute the usual entropy of your point process restricted to the cube relative to the Poisson point process restricted to the cube, you divide by the volume of the cube, and you let R go to infinity. So we have an entropy functional.
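To fix ideas, here is a hedged summary in formulas of the two statements just invoked — Sanov's theorem for empirical measures, and the definition of the specific relative entropy; topologies and normalizing constants are deliberately suppressed.

```latex
% Sanov's theorem, informal form: for x_1,...,x_n i.i.d. uniform on \Sigma,
\[
  \mathbb{P}\Big( \tfrac{1}{n}\textstyle\sum_{i=1}^n \delta_{x_i} \in B(\mu,\varepsilon) \Big)
  \;\simeq\; \exp\Big( -n \int_\Sigma \mu \log \mu \Big),
\]
% so the log-volume of configurations whose empirical measure looks like \mu
% is given by (minus) the entropy of \mu.

% Specific relative entropy of a point process P with respect to the Poisson
% point process \Pi, defined by a large-box limit:
\[
  \operatorname{ent}[P \mid \Pi]
  \;=\; \lim_{R \to \infty} \frac{1}{|C_R|}\,
        \operatorname{Ent}\big( P|_{C_R} \,\big|\, \Pi|_{C_R} \big),
\]
% where C_R is the cube of side R and Ent is the usual relative entropy of the
% restricted processes.
```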
The Poisson point process, by the way, is roughly the point process that you get when you throw your points down at random without interaction — the point process we would get if we didn't have the interaction terms that make the particles repel. A relative entropy always measures, in some sense, a distance: the relative entropy with respect to something is a measure of how close you are to that thing. So here you are measuring how close your point processes are to the Poisson point process of suitable intensity.

OK, and so now I can phrase the result. You see, this entropy will let us compute the volume of configurations that achieve a certain empirical field P-bar, or that are near a certain empirical field. And so now we have a theorem, valid in all dimensions, which I proved with Thomas Leblé: there is a large deviation principle at speed n with rate function the following function — let's call it curly F_beta. It is beta over 2 times W-bar of P-bar, plus the relative entropy here. And of course, you have to take this minus its minimum, whatever that means: the rate function is F_beta minus the minimum of F_beta. OK, can you see it through all the layers of chalk? All right, so the naive writing of the result is that the probability under the Gibbs measure that your empirical field looks like P-bar decays like the exponential of minus n times (F_beta of P-bar minus the minimum of F_beta). As a consequence, everything that's not a minimizer has exponentially small probability, and so the Gibbs measure concentrates on minimizers of F_beta.

OK, so what are the minimizers of F_beta? Well, we expect them to be those limiting point processes that we don't really have a handle on. So a corollary of this result is that in the cases where we do know the limiting point process, like sine-beta or the Ginibre ensemble, we obtain that sine-beta minimizes this functional — the analog of this functional without the average — and the same for the Ginibre point process. What you see in this functional is that there are two competing terms. There is the term W-bar. What does this term want? Well, we think it wants order: we sort of think that lattices are minimizers. So this term is a way of measuring how disordered you are. And the entropy term — what does it want? It wants to be close to the Poisson point process. So it's kind of the opposite: it wants to be very disordered, it wants the points to be very independent of each other. And there is a competition between the two. And you see here the effect of temperature: the temperature is there in beta. When beta is very large, meaning the temperature is very small, it's W-bar that dominates the question, and so you want to be quite ordered. And when beta is very small, meaning the temperature is very large, then this term sort of disappears, and you converge to the Poisson point process. So it fits the intuition well: as you have more temperature you have more disorder, and as you have less temperature you have more order. OK, and so I already talked about the corollary for the known processes.
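Schematically, and hedging the normalization conventions, the rate function and the LDP just described read as follows.

```latex
% Rate function, as on the board:
\[
  \mathcal{F}_\beta(\bar P)
  \;=\; \frac{\beta}{2}\, \overline{\mathbb{W}}(\bar P)
  \;+\; \operatorname{ent}[\bar P \mid \Pi].
\]
% Large deviation principle at speed n, naive form: under the Gibbs measure,
\[
  \mathbb{P}_{n,\beta}\big( \bar P_n \approx \bar P \big)
  \;\simeq\; \exp\Big( -n \big( \mathcal{F}_\beta(\bar P) - \min \mathcal{F}_\beta \big) \Big).
\]
% The energy term favors order, the entropy term favors the Poisson point
% process, and beta tunes the competition between the two.
```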
Another corollary is that if you have an LDP of this form, it goes together with an expansion of log Z. So now we have a next-order expansion of log Z. There are the leading-order terms, n squared times I_V of mu_V — in dimension 3 and higher, if you scale things properly, it should come out like this, but whatever, it comes with an n squared. Then you have the n log n terms, with a beta everywhere: a beta over 2 times n over d log n term, which is there only in the logarithmic cases. And now we know what comes next: there is n times the minimum of F_beta. So we have the existence of what you can call a thermodynamic limit — a sort of free energy per particle. The minimum of F_beta is an unknown quantity, but at least it exists.

And if you're in dimension 1 or 2, something nice happens, because — you may remember — I told you there's a scaling formula for W. So if you look at how things scale with respect to the equilibrium measure (these are all integrals, right, integrals over x, blah, blah, blah), this, we know how it scales, and the entropy term also, we know how it scales. So you can separate the minimum of F_beta into something that's explicit in terms of mu_V and something that's independent of mu_V. And I will write the formula for you: it's essentially the entropy of mu_V plus a constant. It's minus n times (1 minus beta over 2d) times the integral of mu_V log mu_V, plus a constant which this time is independent of V. So this is what I had announced a little bit yesterday and the day before: I had said we know a next-order expansion of log Z, and we know the dependence on V, through the equilibrium measure. It comes from here. It's actually just a consequence of how things scale: when you rescale, this comes out. At the beginning it depends on mu_V, and then you rescale everything — it's just the Poisson process with intensity 1 — and everything else goes into the c_beta. So c_beta, if you want, is the minimum that you get when you normalize everything to have intensity 1.

All right, so in dimensions 1 and 2, such an expansion was of course known — I mentioned yesterday the results of Shcherbina, Borot-Guionnet, et cetera. But they have stronger assumptions on V; they usually assume a lot of regularity. What is nice here is that it works for critical cases and things like that. And also it works in any dimension, so that's new.
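Assembling the terms just listed, and hedging the signs and constants (which depend on the normalization conventions used on the board), the expansion reads something like this.

```latex
% Next-order expansion of the partition function:
\[
  \log Z_{n,\beta}
  \;=\; -\frac{\beta}{2}\, n^2 I_V(\mu_V)
  \;+\; \frac{\beta}{2}\,\frac{n}{d}\, \log n \;\;(\text{logarithmic cases only})
  \;-\; n \min \mathcal{F}_\beta \;+\; o(n).
\]
% In dimensions 1 and 2 the scaling formula separates the mu_V-dependence of
% the order-n term explicitly:
\[
  - n \min \mathcal{F}_\beta
  \;=\; - n \Big( 1 - \frac{\beta}{2d} \Big) \int \mu_V \log \mu_V \;+\; n\, c_\beta,
\]
% where c_beta is independent of V: it is the minimum obtained after
% normalizing everything to intensity 1.
```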
All right, so I want to come back a little bit to the difficulties of the proof. I said that in order to get an upper bound, things are OK, because we have a lower bound for F_n, and then it's about plugging in this Sanov-type theorem for computing the entropy. If you want to go the other way — the lower bound — that's much harder; the hard part of the proof is there. If you want to prove the lower bound, you need upper bounds on the energy, and you have to have enough volume of configurations for which there is a good upper bound. Basically, what you have to show is that among all the configurations you could draw at random, a dominant fraction of them — a logarithmically dominant fraction — has good energy, an energy which is well bounded above. But you may remember that if two points get very close, the energy already goes to infinity, so you're dead. So what you have to do is take these configurations that are given to you at random — Sanov's theorem lets you estimate their volume — and work with them to make configurations that have a good upper bound on the energy. By a good upper bound, I mean configurations for which this thing is essentially bounded above by this. And that's not true for all configurations, so you have to modify them: you take them and modify them, basically regularizing them, moving apart points that are too close together, while at the same time showing that this modification does not make you lose too much volume of configurations. So this is the task. And in order to accomplish this task — to show that you can really bound the energy by this limiting energy — there is a crucial tool, the one we've been using all along for obtaining upper bounds: screening. So maybe I will finish with that; I have five or ten minutes. OK.

So the final word on this is that computing the energy through quantities of this form is convenient because you can make the energy additive over boxes: you can separate things into boxes. This is what we do. We have this Sigma; you blow it up, and you want to cut the blown-up set n^{1/d} Sigma into boxes of size R, where R is a large number, but not depending on n. So in each box you expect to have many particles, but a number not depending on n. And the energy, you will be able to compute as a sum of energies over these boxes, provided a certain compatibility condition is satisfied. So the idea is that you want to solve equations of this form — let's say, with right-hand side the sum of Diracs minus 1 — and you also want to make the normal derivative on the boundary equal to 0. Let's say in a cube: imagine I can solve this in a cube, and then I estimate the truncated energy that goes with it. And imagine I now have it in another cube: I have this in one cube, and the solution of an equation of the same form in another cube. So I call E_1 — E like electric field — the gradient of H_1, where H_1 is the guy in the first cube, and H_2 the one in the second cube. So E_1 is the gradient of H_1, and E_2 is the gradient of H_2. If I look at the functions H_1 and H_2, I cannot glue them together to make a function H, because typically it will be discontinuous at the interface, so it will not make a good potential. However, I can glue together E_1 and E_2: I take E_1 times the characteristic function of cube 1, plus E_2 times the characteristic function of cube 2. Because E_1 dotted with the normal, you see, will be 0 on this boundary, and E_2 dotted with the normal will be 0 on that boundary. And when you glue them together, the divergence of this guy is going to be the sum of the divergences. You may know this fact: if you have a vector field on two sides of an interface, the divergence created across the interface is the jump in the normal component. This is something you see by integration by parts. And here I have built things precisely so that there is no jump in the normal component. So if I put these vector fields together, the divergence of the glued vector field is the sum of the divergences, and nothing is created on the interface. So you get something that still solves a relation of the form: divergence equals sum of Diracs minus 1. You can glue together many pieces like that, and you get a global vector field satisfying a relation of the form div E = sum of Diracs minus 1. What you have lost in this procedure is the fact that E is a gradient: the vector field on each piece was a gradient, but when you put them together, it's no longer a gradient. So you create a big vector field that satisfies this relation.
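In symbols, and hedging signs and orientation conventions (the dimensional constant in front of the Diracs is suppressed), the gluing step is the following.

```latex
% Screened local problems on adjacent cubes K_1, K_2:
\[
  -\operatorname{div} E_i \;=\; \sum_{p \in K_i} \delta_p - 1 \ \ \text{in } K_i,
  \qquad E_i \cdot \nu = 0 \ \ \text{on } \partial K_i,
  \qquad E_i = \nabla H_i.
\]
% Glue the fields, not the potentials:
\[
  E \;=\; E_1\, \mathbf{1}_{K_1} + E_2\, \mathbf{1}_{K_2}.
\]
% Across the interface \Gamma between the cubes, integration by parts produces
% a surface term equal to the jump of the normal component (up to orientation):
\[
  \operatorname{div} E
  \;=\; \operatorname{div} E_1\, \mathbf{1}_{K_1}
      + \operatorname{div} E_2\, \mathbf{1}_{K_2}
      + \big( E_1 - E_2 \big)\cdot \nu \;\delta_\Gamma ,
\]
% and since both normal components vanish on \Gamma, nothing is created there:
% the glued field still solves  -div E = sum of Diracs - 1  globally, but it
% is in general no longer a gradient.
```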
And now the point is that the true energy associated with the configuration is always less than the energy of this vector field. That's because of the projection theorem — you may know the Helmholtz, or Helmholtz-Hodge, projection theorem: if you take a vector field and project it onto gradients, you decrease the L2 norm. And the true field must be the projection onto gradients of this one, so the L2 norm must have decreased. Another way of saying this: by relaxing the condition of being a gradient, you allow yourself to paste vector fields together and to estimate the total energy by the energy of this object made by gluing things together. And so you can estimate the total energy — bounded from above, which is what you want — by the sum of the energies in the cubes. What this means, somehow, is that by enforcing this condition on the boundary, you have made the cubes independent: they don't interact anymore. You have sort of screened each cube — that's why it's called screening. And once the cubes are made independent, it's much easier to sum the energies in this way.

Now the difficulty is that if you're given a configuration in a cube, in general the potential that comes with it does not satisfy this relation. In fact, for it to satisfy this relation, a necessary condition is that the integral of the right-hand side has to be 0 — that's Stokes' theorem: this thing has to integrate to 0, because the integral of this is the integral of the boundary flux, which vanishes. So the box has to be neutral: the number of points in the box has to be exactly the volume of the box. And if you're given a configuration, it's typically not true that the number of points in the box equals the volume of the box. So what we devise is a procedure that takes a configuration in a box and modifies it in a relatively thin layer near the boundary in order to make it like that — to screen it. It does not change the configuration inside the cube, but it changes it in the margin near the boundary, in such a way that the configuration becomes neutral and the energy has not increased too much. This is the procedure that allows all the proofs to work. And in addition, because you only change the configurations near the boundary, you modify only a small fraction, and so you don't lose volume — and it was all about not losing volume when computing volumes for the entropy. So this is the ingredient that goes into the lower bound in the probability here. OK? So I can stop. Thank you.
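For reference, the two facts used in this last step, in hedged form (constants suppressed):

```latex
% (1) Helmholtz-Hodge projection: projecting a vector field onto gradients
% decreases the L^2 norm. If grad H is the true (gradient) field and E the
% glued field with the same divergence, then grad H is the L^2-projection of E
% onto gradients, so
\[
  \int |\nabla H|^2 \;\le\; \int |E|^2 \;=\; \sum_i \int_{K_i} |E_i|^2 .
\]
% (2) Neutrality as a necessary condition for the screened problem: integrating
% -div E = sum of Diracs - 1 over the cube K, and using Stokes' theorem with
% E . nu = 0 on the boundary,
\[
  0 \;=\; -\int_{\partial K} E \cdot \nu
    \;=\; \int_K \Big( \sum_p \delta_p - 1 \Big)
    \;=\; \#\{ p \in K \} - |K| ,
\]
% so the number of points in the box must equal the volume of the box.
```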
[Audience question.] Yeah, so these have to be handled separately, right? There is a boundary layer: you cannot tile your set exactly by cubes, so there's going to be a boundary layer here. This boundary layer is negligible with respect to the volume, right? So no, it doesn't contribute. Basically, you're going to discard the configuration: you're given a configuration, you just keep what's happening in those boxes, and what's near the boundary you discard and replace by something you make by hand. And because you make it by hand, you can control its energy; you make it neutral. And it's going to contribute a fraction of energy which is exactly proportional to the area of this region. You see, because once you have made all the boxes neutral, what remains to be put here is also neutral, and so you can do it by hand. It's rather tedious, but it's fine.

[Audience question.] So these are microscopic boxes: in the original scale they are of size R n^{-1/d}, with R large. So they are large at the microscopic scale.

[Audience question.] Ah, yes, so I assume — yeah, this is what I discussed at the beginning: you have the equilibrium measure, and you make assumptions that the boundary is nice, precisely because you want to be able to control these sorts of boundary layers. Again: you take your domain — say it's a disk or whatever — you tile it with microscopic boxes, and so there is a region of size, say, R n^{-1/d} which is not tiled.

[Audience question.] A single box. Oh, you were talking about that picture. Ah, yes, you're asking about the size of this layer. So near the boundary, you erase the configuration that you had, and you build one by hand, which is essentially there to absorb the boundary data. You're going to do a sort of mean value argument to find a good boundary — you see, this thing is like a boundary on which the energy is not too large: the integral of |grad h|^2 on the boundary of this set is made not too large by a mean value property; you can find a good cube like that. OK, and then in this layer you're going to put boxes like this, and several layers of boxes. You're given this sort of boundary data, the normal derivative dh/d(nu), here, and you can decide how many points to put in each box so as to absorb the effect of this boundary data. Then you use elliptic regularity estimates to compute the energy that this costs. And the energy it costs is going to be related to that thing — but that thing, you see, you can make smaller than the original energy divided by R, roughly, by your mean value property. So you can make it negligible. And then it's all about making cubes, putting one point in each cube, putting suitable boundary data on them, and estimating the energy of that. So you completely throw out the configuration that you had, and by placing these points by hand you make the thing neutral in the end — the procedure is devised to make it neutral. And then you allow the points to move in a little ball, because you don't want to lose volume: you want to still have a little bit of volume of configurations, so you let your points move. So there will be well-separated points, and you let them move a little bit. And what you do in this layer here is completely similar: you put points in boxes, and you let them move a little bit. Each box is neutral, and the energy is just additive in the number of boxes. And this is why, in the end, everything scales like n, which is the volume of the blown-up set: each box is neutral, each box costs order one, and you expect the energy to be proportional to the number of particles in that way.
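As an aside, the mean value argument mentioned in this answer is a standard averaging (pigeonhole) step; here is a hedged sketch, with the interval [R, 2R] chosen purely for illustration.

```latex
% Averaging over the possible positions t of the cut: since, by Fubini,
\[
  \int_R^{2R} \Big( \int_{\partial C_t} |\nabla h|^2 \Big)\, dt
  \;\le\; \int_{C_{2R}} |\nabla h|^2 ,
\]
% there must exist a good radius t* in [R, 2R] with
\[
  \int_{\partial C_{t^*}} |\nabla h|^2
  \;\le\; \frac{1}{R} \int_{C_{2R}} |\nabla h|^2 ,
\]
% i.e. the boundary energy of the good cube is the bulk energy divided by R,
% which is what makes the screening cost negligible for R large.
```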