Thank you very much for coming all the way to the last lecture. So today I want to talk about the following question. Remember, we have the support of the equilibrium measure, and we have determined, via the large deviation principle, that the limit of the empirical measures is essentially the equilibrium measure. So you are going to have n points quite densely packed in some bounded set. What we want to do now is zoom. Say I take an origin here, x, and I zoom my configuration by multiplying everything by n^{1/d}, which is the scale I expect I need to zoom by. After zooming, I get a configuration of points which are now well separated. What is their density? Well, if I zoom near x, the density, meaning the average number of points per unit volume, is of course mu_V(x); I assume that mu_V is a regular measure. So when n goes to infinity, I am going to get an infinite configuration of points in the whole space. And in fact, if we are looking at a situation with temperature, this configuration of points is random, so it forms what is called a point process, a random point process.

It turns out that in dimension 1, the limiting point processes obtained from the log gases are understood: they are characterized, and they are called the sine-beta processes, characterized by Valkó and Virág. If beta happens to be 2, then it is a determinantal point process, and you can compute everything from a kernel. But in dimension larger than 1, there is only one case where the limiting point process is known: d = 2, beta = 2, essentially with V quadratic, say. Then the limiting point process is what is called the Ginibre point process, associated to the Ginibre ensemble, and it is also determinantal: there is an explicit kernel, you can compute essentially everything. But beyond those two situations, even the bare existence of the limiting process is not known. Nobody knows.

OK, so what I would like to do now is introduce an object that will describe these configurations, that will essentially compute their energy. And instead of obtaining a large deviations principle on the empirical measure, we are going to work on what is called the empirical fields, or what is called a next-order large deviation principle. So let me first define my quantity. Remember how we were working with a next-order energy F_n, which was defined with truncations, minus c_d times the sum of g(eta_i), and we had these equations characterizing the electric potential generated by the configuration. So now let me zoom everything by n^{1/d}, as I announced. When you zoom, you get a blown-up potential, and it solves an equation of this form: minus the Laplacian of H'_n (truncated at eta) equals c_d times the sum of Diracs at the zoomed points minus mu_V(x + n^{-1/d} ·). So the sum of Diracs transfers into a sum of Diracs at the zoomed points, and the background becomes mu_V evaluated at x plus the rescaled variable. This is the equation you obtain for the blown-up potential. Here I am using primes to denote the blow-up: x'_i is just x_i multiplied by n^{1/d}, and H'_n is the proper rescaling of H_n. Now if I take limits as n goes to infinity in such an equation, I expect my sum of Dirac masses to converge to a sum of fixed Dirac masses, but this time the configuration is infinite. So let me identify a configuration with its sum of Diracs.
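To fix what is on the board, here is a sketch of the blown-up equation; the constant c_d and the truncated Diracs delta^(eta) are assumed to follow the normalizations of the earlier lectures, and the precise prefactor in the rescaling of H_n is left to those conventions.

% Sketch: blown-up coordinates and the equation solved by the blown-up potential,
% assuming the conventions of the previous lectures for c_d and \delta^{(\eta)}.
\[
  x_i' \;=\; n^{1/d}\, x_i \qquad (i = 1, \dots, n),
\]
\[
  -\Delta H'_{n,\vec\eta}
  \;=\; c_d \Big( \sum_{i=1}^{n} \delta^{(\eta)}_{x_i'} \;-\; \mu_V\big(x + n^{-1/d}\,\cdot\big) \Big)
  \qquad \text{in } \mathbb{R}^d .
\]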
A configuration is a sum of Diracs, and I identify the configuration with the sum of Diracs. So I call this C: C is either a collection of points or the sum of Diracs at those points. And if you take the limit of this equation formally, you get a limiting potential, say H, satisfying something like: minus the Laplacian of H equals c_d times (the sum of Diracs on the configuration minus m). Indeed, the limit of the background term is just mu_V at the blow-up center, which is a constant; I call it m. It is just the memory of the density at the point where you zoomed. So when we rescale everything, we find ourselves with an equation of this form: a sum of Diracs minus a constant background. If you are in dimension 1, remember you have to multiply by the Dirac mass on the real line; I will omit this, but there is an analogue in dimension 1.

So we have a system which is an infinite configuration of Dirac point charges together with a uniform negative background charge minus m. Physicists call such a system a jellium: it is a system which is, in a sense, globally neutral. And now we can define the energy of the jellium, inspired by the definition, or the computation, that was made before. We define it as follows: given an H as above, I take the limit, or the limsup, or whatever, as R goes to infinity, over cubes K_R of size R, normalized by the volume of the cube, of how much energy there is in the cube. And I have to do a double limit: I take the limit as eta goes to 0 of the energy of the truncated version of H in the cube, minus c_d times m times g(eta) times the volume. This whole thing is the energy. And then I define W of the configuration C to be the infimum over all possible choices of potential compatible with the configuration.

Let me slow down a little bit. What I am doing here is computing the energy of this potential: formally, I take the L2 norm of the gradient in a big cube of size R and look at how much energy of the potential there is in that cube. But if you remember, this has to be computed with a truncation, otherwise it is infinite. So the eta is there to truncate, and then you subtract off the divergent part. The m is there because it is the number of particles per unit volume: this is the natural thing to put when you subtract g(eta), because you have to subtract it per particle. So, written this way, I should really put m times the volume of the cube, which is the expected number of particles in the box. So this is the energy in the cube; I compute the energy per unit volume, and I let the size of the cube go to infinity.

Question? Yeah. Is C a function? For any configuration, it is a function of C; for now, I am defining a function of C. OK, so I have these infinite configurations with the neutral background; let me denote the pair (C, m), so I remember the background. And I have to take this infimum because, you see, if I have a solution to this equation, I in fact have many solutions: I can add any harmonic function and it will still be a solution. So I take all the possible potentials that are compatible — compatible means they solve this equation — and I take the infimum of the energies. In fact, there is one that achieves the minimum. So you have to mod out by these harmonic functions, which represent the far field, in fact. OK, so I have defined this thing. Does it mean anything? That is the question, right?
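Putting the pieces together, and hedging on the exact constants (the placement of c_d and the g(eta) subtraction are assumed to match the earlier lectures), the jellium system and its renormalized energy read roughly as follows.

% Sketch of the limiting jellium equation and of the renormalized energy;
% H_\eta denotes the truncated potential, K_R a cube of sidelength R,
% and "compatible" means that H solves the equation below.
\[
  -\Delta H \;=\; c_d \Big( \sum_{p \in \mathcal{C}} \delta_p \;-\; m \Big)
  \quad \text{in } \mathbb{R}^d, \qquad m = \mu_V(x),
\]
\[
  \mathcal{W}(H) \;=\; \limsup_{R \to \infty} \frac{1}{|K_R|}\,
  \lim_{\eta \to 0} \Big( \int_{K_R} |\nabla H_\eta|^2 \;-\; c_d\, m\, g(\eta)\, |K_R| \Big),
\]
\[
  \mathbb{W}(\mathcal{C}, m) \;=\; \inf \big\{ \mathcal{W}(H) \;:\; H \text{ compatible with } (\mathcal{C}, m) \big\}.
\]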
The claim is that this is a way of computing, if you want, what you would essentially want to compute: the sum of pairwise energies. Say, the double integral of g(x − y) against (sum of Diracs minus m)(x) and (sum of Diracs minus m)(y), over the complement of the diagonal and in a cube of size R. That is roughly what you are trying to do. But it is not at all obvious that these two are the same, and in fact, in general, they are not; there are some conditions under which they are. Still, this is an object that essentially achieves this, replacing the computation of pairwise interactions in a cube by something based on the potential, which turns out to be more convenient to work with.

OK, so can we compute this quantity? Well, there are cases where you can compute it more explicitly — by explicit I mean something more explicit as a function of the points. There is an explicit form if the configuration is periodic, for example: take a box, put any configuration in it, repeat it periodically, and you can compute W in terms of the Green's function of the underlying torus. A periodic configuration means you have a torus, and you can write W in terms of G, which is now not the Coulomb kernel but the periodized Coulomb kernel, the Coulomb kernel of the torus. So there is something explicit.

And now we get to the following question. Why? Because in the end I am going to prove that this W is actually the correct limit of the next-order energy. So if you are interested in minimizers, for example, you will want to minimize this W. Hence the question: what is the minimum of W, and which configurations achieve it? We have only very few answers. In 1D, the minimum is achieved — you could guess this — at the configuration which is the proper rescaling of the lattice Z: you put the points completely regularly. So you have a logarithmic interaction of points with a neutral background, and they choose to place themselves periodically in order to minimize their interaction.

Beyond that, we do not have other results; the only partial result is in 2D. In 2D, we only know the minimizer among lattices. So you take a configuration which is already a perfect lattice: not only do you assume periodicity, but you assume it is exactly a Bravais lattice. Then there is a theorem saying that the minimum of W over lattices of fixed volume, say volume 1, is uniquely achieved at the triangular lattice. If your configuration is already a lattice and the volume is fixed, you have only two parameters you can vary: the length of one of the vectors and the angle — you can look at rectangular lattices and so on.

How does the proof go? First you use the explicit form in terms of the Green's function of the torus. This Green's function, in turn, you can express with modular functions, modular forms. And then you look in the literature, and there are theorems from the 50s in number theory saying that the triangular lattice is the best. So this function W is at least good for something: it can distinguish between two different microscopic configurations, such as two lattices, and it tells you that the best is the triangular one. Triangular means you make equilateral triangles.
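As a rough transcription of what is on the board: the pairwise-energy heuristic that W is meant to capture, and the 2D statement among lattices with the triangular lattice written out explicitly. The per-unit-volume normalization in the first display, and the normalizing constant lambda_0 in the second, are my reading of the statement rather than something spelled out in the lecture.

% Heuristic quantity that W is meant to capture (restricted to a cube K_R,
% off the diagonal, per unit volume to match the definition of W):
\[
  \frac{1}{|K_R|} \iint_{(K_R \times K_R) \setminus \triangle} g(x-y)\,
  d\Big(\textstyle\sum_{p \in \mathcal{C}} \delta_p - m\Big)(x)\,
  d\Big(\textstyle\sum_{p \in \mathcal{C}} \delta_p - m\Big)(y).
\]
% 2D result among Bravais lattices of covolume 1 (triangular = equilateral triangles):
\[
  \min_{\substack{\Lambda \subset \mathbb{R}^2 \text{ Bravais lattice} \\ |\mathbb{R}^2/\Lambda| = 1}} \mathcal{W}(\Lambda)
  \quad \text{is uniquely achieved at} \quad
  \Lambda_\triangle = \lambda_0 \Big( \mathbb{Z}(1,0) \oplus \mathbb{Z}\big(\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\big) \Big),
\]
% where \lambda_0 = (2/\sqrt{3})^{1/2} rescales the fundamental cell to have area 1.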
And remember, this is the lattice that is observed in superconductivity: vortices in superconductors form these triangular lattices, which are called Abrikosov lattices in physics — Abrikosov got the Nobel Prize for predicting this. So it is consistent with what we obtained, because one can prove that minimizers of the superconductivity energy must converge, after blow-up, to minimizers of this function.

Then you are left wondering, of course: is this really the minimum over all possible configurations? This is an open question, maybe a conjecture: in 2D, the minimum is achieved at the triangular lattice. What about higher dimensions? The situation is even worse there, because in dimension three, say, the minimizer among lattices is not even known. There is a conjecture that it is the BCC lattice, the body-centered cubic lattice, but the corresponding result from number theory is an open question — let alone, of course, knowing which configuration is the minimizer among all configurations. There are some exotic things that happen in dimensions 8 and 24: there are some special lattices there, and in dimensions 8 and 24 one can prove that those lattices are the best among lattices. In large dimensions, you should not expect a lattice at all. That is a little surprising, but in large dimensions the lattices are somehow too sparse — there is too much space — and there are better ways of placing configurations. Of course, all of this is completely unproven.

Now, the way I defined it, the minimizer is not unique, because you can always perturb your configuration in a compact set and it will not be felt: a compact perturbation disappears because I take limits over larger and larger boxes. But if instead you look at the function on point processes and you assume they are stationary, then you should expect uniqueness. In 1D this is proven, and 1D is the only case where we know the minimizer: among stationary processes, the minimum is uniquely achieved at the suitable stationarization of the lattice. This is a result proven by Thomas Leblé. By the way, in 1D the interaction does not have to be logarithmic: the same is true for all interactions of the form 1/|x − y|^s; you can prove that the lattice is the best, so it is not at all specific to the log. And in dimension 2 also — since I am talking about it, I think this topic is fun — the triangular lattice is expected to be optimal for a wide class of interactions; it is not specific to Coulomb either. There is a conjecture by Cohn and Kumar that the triangular lattice is universally optimal, that is, it is the optimizer for a whole class of interaction energies.

Is there a physics motivation for looking at high dimensions, or is it just a mathematical problem? Physicists seem to be interested in it — I have been to physics conferences where it was discussed — but I think the main motivation comes from approximation theory: signals live in large dimensions, and people like to pack the space with spheres, so people are interested.

All right, there is one more thing I want to say: there is a scaling relation. Maybe I will take out my notes in order not to make a mistake. Remember that there is this parameter m that corresponds to the density.
It is very easy to rescale things: if you have a configuration with density m, you can blow it up or down to make it density 1; that is not difficult. And so we have a scaling relation, which will be a bit useful later: the energy with background m is expressed through the properly rescaled configuration with background 1, with an extra additive term −(2π/d) m log m in the logarithmic cases; in the Coulomb case in dimension 3 and higher, the extra factors are multiplicative instead. So the logarithmic cases, dimensions 1 and 2, and the Coulomb case in dimension 3 and higher behave differently under rescaling, and that is actually something interesting.

All right. So now we have a definition for a fixed configuration. What are we going to do for our random configurations? We are going to define P^x_n. Remember, I have a blow-up center, which I call x, and P^x_n is going to be the point process that I see when I center at x. If I have a configuration x_1, ..., x_n, which I denote by the vector X_n, then the zoomed configuration is X'_n, namely n^{1/d} times X_n, and P^x_n is the Dirac mass at the configuration X'_n shifted to be centered at x — I shift by n^{1/d} x. So x lives in the original set Sigma; I zoom everything, so it becomes n^{1/d} x; I recenter everything at that point; and I form the Dirac mass at the resulting configuration. That gives me a point process: the point process centered at x.

Then I form P_n bar, which is essentially the integral over Sigma of the Dirac at (x, P^x_n) dx, normalized by the volume in order to make it a probability. This is now a probability — maybe I should tell you how it acts on test functions. I glue together all the P^x_n, and this becomes what we call a tagged point process, because there is a tag, x, which is just the memory of where you centered. If I integrate a test function of a point and a configuration against dP_n bar, it is the same as integrating over x the quantity F(x, C) dP^x_n(C): for each x, P^x_n is a probability on configurations. OK, so this P_n bar is what we call the empirical field; it is the object that encodes all the blow-ups of your configuration, with all the recenterings.

And now we can define an energy W bar for these empirical fields as follows: W bar of P bar is simply the average of W of the configuration with background mu_V(x), integrated against dP^x(C), and then averaged over x. So for each given x, you compute W, the energy of the configuration with the background that corresponds to that point, and then you integrate. Essentially, you are going around the domain and averaging the energies of the point configurations you see when you zoom around each point. For instance, imagine a situation where, when you zoom, half of the time you see a triangular lattice and half of the time you see a square lattice; then this quantity will be the average of half the energy of the triangular lattice and half the energy of the square lattice. It is the average. And I claim that this quantity is the right limiting quantity. So let me write it.
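Here is a sketch of these definitions in symbols. The notation theta_y for the translation of a configuration by y, and the exact normalization by |Sigma|, are my reading of the board rather than something stated explicitly.

% Sketch of the tagged empirical field and of its energy; \theta_y denotes
% translation of a configuration by the vector y, and \Sigma is the support
% of the equilibrium measure.
\[
  \mathcal{C}'_n = \sum_{i=1}^{n} \delta_{n^{1/d} x_i}, \qquad
  P^x_n = \delta_{\theta_{n^{1/d} x} \cdot \mathcal{C}'_n}, \qquad
  \bar{P}_n = \frac{1}{|\Sigma|} \int_{\Sigma} \delta_{(x,\; \theta_{n^{1/d} x} \cdot \mathcal{C}'_n)}\; dx,
\]
\[
  \int F \, d\bar{P}_n
  = \frac{1}{|\Sigma|} \int_{\Sigma} F\big(x,\, \theta_{n^{1/d} x} \cdot \mathcal{C}'_n\big)\, dx,
  \qquad
  \bar{\mathbb{W}}(\bar{P}) = \iint \mathbb{W}\big(\mathcal{C}, \mu_V(x)\big)\, d\bar{P}(x, \mathcal{C}).
\]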
These P_n bar are going to have a limit: you can prove a tightness property, so that, up to extraction of a subsequence, the P_n bar associated to a sequence of configurations converges to some P bar. So far there is no probability — the configurations are given, I do not yet know by which process. And then I can define W bar of P bar; the bars stand for averaged quantities.

So the claim is this: the next-order energy that we were looking at for the point configuration, normalized by n, is bounded from below by the energy of the limiting empirical field. There is a proposition to that effect. I should mention that this energy W was first introduced in a work with Étienne Sandier and then improved in a work with Nicolas Rougerie, and essentially we already had this lower-bound step there. So the idea is that W bar of P bar is a good candidate for the limiting object of the energy — the energy after you subtract off the fixed dominant terms — and the point is that this inequality is essentially optimal. In what sense? It is optimal in the sense that if I give myself a P bar, I can reconstruct configurations for which the inequality goes in the other direction: there exist good configurations for which the bound is matched. Of course, there are many configurations for which there is no equality; it suffices to take a configuration and let two points get very close to each other — then the energy has to go to infinity, while the right-hand side does not. So equality holds only for good configurations.

Yes? — yes, so once over Sigma, once against P^x. Yes, thanks. Yes, essentially yes. The proof is not very hard: you look at the energy and observe that it can be rewritten as an average against these P_n's, and essentially the quantities are lower semi-continuous. And you see, this is where it is very useful that W bar was defined the way it was, because it exactly mirrors the definition used in F_n; it follows it exactly. So then it is lower semi-continuity — it is just about finding the right objects to talk about. The proof is not very difficult.

OK, so now we have this limiting object, and as I said, the bound is optimal. So if you are interested in minimizers of the original energy, this is essentially the end of the story. If you are looking at minimizers, first you use the splitting formula, which takes out the leading-order terms: n squared times I_V(mu_V), possibly the n log n term. Then you are left with this next-order term, and the result tells you that you have to minimize W bar. Because I know how to build configurations for which there is equality, I can build configurations which achieve this, and this gives me an expansion of the minimum. And you now know that if you take minimizers, their empirical fields have to converge to minimizers of this W bar. And because W bar is an average of W, this means roughly that if you zoom at almost every blow-up center, you should see a minimizer of W with respect to the suitable background: the equilibrium measure dictates the background, it dictates the density of points, and if you zoom, you should see, most of the time, a minimizer of W with that density.
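Schematically, the lower bound and the resulting expansion of the minimum take the following form; the constants C and c_log (and any |Sigma| factor absorbed into them) depend on normalizations from the earlier lectures that I am not reproducing here, so they should be read as placeholders.

% Sketch: lower bound on the next-order energy and expansion of the minimum.
% C and c_log are unspecified normalizing constants; c_log is nonzero only in
% the logarithmic cases (where the n log n term appears).
\[
  \liminf_{n \to \infty} \frac{1}{n}\, F_n\big(\vec{X}_n, \mu_V\big)
  \;\ge\; C\, \bar{\mathbb{W}}(\bar{P}),
  \qquad \bar{P} = \lim_{k \to \infty} \bar{P}_{n_k} \ \text{(along a subsequence)},
\]
\[
  \min \mathcal{H}_n \;=\; n^2 I_V(\mu_V) \;+\; c_{\log}\, n \log n \;+\; C\, n\, \min \bar{\mathbb{W}} \;+\; o(n).
\]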
So now, if we believe the conjecture, in 2D, for example, this tells us that we expect to see, almost all of the time, something that looks like a triangular lattice — or at least something that has the energy of the triangular lattice. Of course, it can have many defects; this quantity is too rough to see defects in a lattice. In 1D, it means that you expect to see configurations that look like the rescaled lattice. So you expect these limiting point processes to be concentrated on lattices, or on optimizers. All right, so now we want to look at the situation with temperature. What do we have with temperature? Remember, we formed the Gibbs measure.
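For reference, the Gibbs measure from the earlier lectures that we now return to has the following shape; the precise temperature scaling (where the factor of n sits relative to beta in the exponent) follows the convention introduced earlier, which I am only assuming here.

% Sketch of the Gibbs measure of the Coulomb/log gas, as set up in the earlier
% lectures; the beta-vs-n scaling in the exponent is an assumed convention.
\[
  d\mathbb{P}_{n,\beta}(x_1, \dots, x_n)
  = \frac{1}{Z_{n,\beta}} \exp\big( -\beta\, \mathcal{H}_n(x_1, \dots, x_n) \big)\, dx_1 \cdots dx_n,
\]
\[
  \mathcal{H}_n(x_1, \dots, x_n) \;=\; \sum_{i \neq j} g(x_i - x_j) \;+\; n \sum_{i=1}^{n} V(x_i).
\]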