Let me first take a few seconds to thank the organizers for inviting me and for putting together this really nice-looking summer program. All right: I am going to be interested in systems of points, or particles, x_1, ..., x_n, living in the space R^d and interacting through an energy of the following form: H_n(x_1, ..., x_n) = Σ_{i≠j} g(x_i − x_j) + n Σ_{i=1}^n V(x_i), that is, a sum of pair-potential terms plus n times the sum of the V(x_i). Here V is some external potential, or field; you can call it a confining potential, confining because it grows sufficiently fast at infinity, and g is the interaction, which will have some very specific forms. I am going to consider three cases: either g(x) = −log|x| in dimension one, or g(x) = −log|x| in dimension two, or g(x) = 1/|x|^{d−2} in dimension d ≥ 3. The first two cases I will call the logarithmic cases, for obvious reasons, and the 2d logarithmic case and the higher-dimensional case here are what correspond to Coulomb cases, or to a Coulomb gas, simply because −log is, up to a constant, the Coulomb kernel in 2d, and 1/|x|^{d−2} is the Coulomb kernel in dimension 3 and higher. So in those cases the interaction between the particles is specifically Coulombic; in dimension one it is something similar but a little bit different: there the log is not the Coulomb kernel, but it is a bit as if you had a 2d Coulomb interaction and constrained yourself to live on the line. Now, we can be interested in just minimizing these functions H_n, letting n of course go to infinity, or we can be interested in the situation with temperature, and then you form the Gibbs measure: β is an inverse temperature, and you set dP_{n,β}(x_1, ..., x_n) = (1/Z_{n,β}) exp(−(β/2) H_n(x_1, ..., x_n)) dx_1 ... dx_n, where Z_{n,β} is the normalizing constant.
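As a concrete anchor for this notation, here is a minimal numerical sketch of the energy H_n in the 2d logarithmic case. The function names are my own, and the convention that the pair sum runs over ordered pairs i ≠ j (so each unordered pair is counted twice) is an assumption about the normalization, not something fixed by the lecture:

```python
import numpy as np

def H_n(points, V):
    """Energy H_n = sum_{i != j} g(x_i - x_j) + n * sum_i V(x_i),
    with g(x) = -log|x| (the 2d logarithmic / Coulomb case).
    `points` is an (n, d) array; the pair sum runs over ordered pairs
    i != j, so each unordered pair is counted twice."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)               # unordered pairs i < j
    pair = 2.0 * np.sum(-np.log(dist[iu]))     # doubled to count i != j
    confinement = n * np.sum(V(points))
    return pair + confinement

# quadratic confining potential V(x) = |x|^2
V_quad = lambda x: np.sum(x**2, axis=-1)
print(H_n(np.array([[0.0, 0.0], [1.0, 0.0]]), V_quad))  # -> 2.0, since g(1) = 0
```

For two points at distance 1, the pair term vanishes and only the confinement term n ΣV = 2 · (0 + 1) remains, which is an easy hand check of the convention.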
Okay, so this is called the Gibbs measure: it tells you, essentially, the probability of seeing the configuration (x_1, ..., x_n) at inverse temperature β. So how are these things related to random matrices? They are related through things that were actually discussed already today. There are the very classical and fundamental models of random matrix theory: the GOE, the GUE, and the Ginibre ensemble, by which here I mean the complex Ginibre ensemble. These are ensembles where you take an n × n matrix and draw the entries at random as i.i.d. Gaussians; in the GOE you additionally ask the matrix to be real symmetric, in the GUE you ask it to be complex Hermitian, and in the Ginibre ensemble the entries are just complex Gaussian with no particular constraint. So it is a little bit different from what Yan Fyodorov was discussing: he was discussing real entries, here the entries are complex. Then you compute the law of the eigenvalues of these very specific ensembles, and you find that it has exactly the form of the Gibbs measure above. The GOE corresponds to the dimension-one situation, of course, because the eigenvalues are on the real line, with β = 1 and V quadratic (maybe it is x²/2, I never remember); the GUE is again a law in dimension one, with β = 2 and V quadratic; and the Ginibre ensemble is dimension two, β = 2, and V(x) = |x|². So this correspondence covers the logarithmic cases only, and only quadratic confinement potentials, but these are really the model cases of what we want to understand. We are also interested in looking at more general situations, where β can be different from these values and where V can be more general, and that is when we call these things a Coulomb gas or a log gas. In particular we are interested in understanding how much of the behavior really depends on V, how it depends on β, and so on. So this is one possible motivation, random matrix theory, but you can find many
motivations that are outside of that. Coulomb systems are interesting in general, just as a statistical mechanics ensemble, for the fractional quantum Hall effect, or for vortices in superconductors, which are also systems in which you see points that interact logarithmically. In a way you can see it the other way around: eigenvalues of random matrices tend to repel each other logarithmically, like Coulomb particles. Another motivation, maybe less known in this audience, is approximation theory and Fekete points. When you want to numerically integrate a function, say on a surface, you approximate the integral by a Riemann sum, summing the values of the function at well-chosen points. The best way to choose the points is to extremize exactly this type of energy: you want to maximize the product of the distances between the points, because that is what produces the smallest interpolation error, and maximizing the product of the |x_i − x_j| is the same as minimizing −Σ_{i≠j} log|x_i − x_j|. Doing the same thing with an external potential added corresponds to looking at weighted Fekete points. So this is also very much studied in the approximation theory literature, and there too people are interested in more general interactions; it is a motivation for just looking at minimizers of such energies. And something I want to mention right away: if you have the notes on your computer, you can look at the pictures. When you look at vortices in superconductors, whose locations essentially minimize an interaction like this in the 2d log case, you see that the vortices form triangular lattices: in experiments the vortices are densely packed, and they arrange themselves according to what looks, at least to the eye,
typically like a perfect triangular lattice, made of equilateral triangles. And if you look at simulations of Fekete points on surfaces, they also seem to form triangular lattices. So part of the question is to understand why such patterns come up, at least in minimizers of these types of energies. Okay, so these were the motivations; now let me start by recalling some relatively classical facts. When you look at configurations like this, you may want to form what is called the empirical measure, or the spectral measure if you are looking at random matrices, which is just the probability measure (1/n) Σ_{i=1}^n δ_{x_i} obtained by summing the Dirac masses at the points of the configuration. Typically you want to understand its limit points as n → ∞, and if you do that, at least formally, you can guess that this is related to the following functional, which is nothing else than the continuum version of the interaction energy: I_V(μ) = ∬ g(x − y) dμ(x) dμ(y) + ∫ V(x) dμ(x), for μ a probability measure on R^d. It is not too difficult to guess that H_n, normalized by n², behaves like I_V(μ): if the empirical measure μ_n converges to μ, then you expect H_n/n² → I_V(μ), or at least an inequality in one direction. This can actually be made rigorous, and if you want to see it, you may want to rewrite the sum as an integral against Dirac masses: write the pair sum as ∬ g(x − y) dμ_n(x) dμ_n(y), and so on. A little care has to be paid to the fact that in the discrete sum you remove the diagonal terms, those with i = j, for which the energy would be infinite; there is no g(x_i − x_i) term, because for all these potentials g(0) is infinite, whereas when you look at the continuum version, somehow this has
disappeared: the diagonal is back in the integral. This is nevertheless okay, and what is well known is that, roughly, minimizers of H_n do converge to minimizers of I_V; or better, to the minimizer of I_V among probability measures, which is called the equilibrium measure and which I will denote μ_V (it is also called the Frostman equilibrium measure). The problem of minimizing I_V is an old one, at the beginning of potential theory, and it is easy to see that there is a unique minimizer, because I_V is strictly convex on probability measures, as you can check. The minimizer has a characterization, which you find by trying variations: if you know μ_V is the minimizer, you can try the convex combination (1 − t)μ_V + tν, whose energy must be larger than that of μ_V for every t in [0, 1] and every probability measure ν. So try that, expand in t, let t → 0, and you obtain what is essentially the Euler-Lagrange equation for the minimization of I_V. The result is the following. Assume that V grows sufficiently fast at infinity and is lower semicontinuous; that is good enough. Then μ_V is uniquely characterized by the following relations: there exists a constant c such that h^{μ_V} + V/2 ≥ c everywhere, with equality on the support of μ_V. Here, for any measure μ, the notation h^μ means the convolution of g with μ, that is, h^μ(x) = ∫ g(x − y) dμ(y). You can think in terms of electrostatics: μ is a distribution of charges, a probability density, and if you integrate g(x − y) dμ(y) you are forming the electrostatic potential generated by μ, the Coulomb potential generated by μ if you are in the Coulomb cases. So the relation is telling you that the Coulomb potential plus the external field has
to be constant on the support of the equilibrium measure. So it is essentially a variant of the capacitor problem, but with an external field, and it is what you obtain when you carry out the variations I described above. Any questions? [Question.] Yes, the support can have several connected components, that is possible; but the constant c will be the same on all of them. The only thing you can prove in general is that if V grows sufficiently fast at infinity, then μ_V has compact support; it is a compactly supported measure, but it can have several connected components, depending on V. I will restrict myself to situations where μ_V is nice, at least where the support of μ_V is a fairly nice set with a nice boundary; it may have several connected components, but it is compactly supported. In such nice cases, what you expect is that μ_V has a density, so it is not only a probability measure but a probability measure with a density, and I will not distinguish between the density and the measure. If it has a density, then in the support we have the relation above, h^{μ_V} + V/2 = c. Now let us restrict ourselves to the Coulomb situations, so let me exclude the first case. In the Coulomb situation, g is the Coulomb kernel, and what is the definition of being the Coulomb kernel? It is the same as saying that g solves −Δg = c_d δ_0, the Dirac mass at the origin; the constant c_d, which depends on the dimension, appears because I did not normalize my Coulomb kernel properly. So in the Coulomb cases you have this relation, and in particular, if you form the potential h^μ generated by a measure μ and compute its Laplacian, it has to be μ up to the constant: you can take the Laplacian inside the integral, you find the Laplacian of g, you get the Dirac mass, and so what you find in
the same way is that −Δh^μ = c_d μ. So now I can work in the support of the equilibrium measure and take the Laplacian of the relation h^{μ_V} + V/2 = c. If I assume the support is a nice set with an interior, I find that in the interior Δh^{μ_V} + Δ(V/2) has to be the Laplacian of the constant, which is 0, and that means c_d μ_V = −Δh^{μ_V} = Δ(V/2). So I have a formula for the density of the equilibrium measure, which of course I should multiply by the characteristic function, the indicator function, of the support; from now on I will denote by Σ the support of the equilibrium measure. It is a little calculation: it tells you that if you know you have a nice support with an interior, and a regular enough potential V (I am taking the Laplacian of V here, so I would prefer V to be, say, C²), then you have a formula. But the support remains unknown: this formula is not enough, you still do not know what Σ is, and you cannot find it other than through these sorts of implicit relations. [Question.] Yes, the support is always compact. The problem is that μ_V could be singular, say a probability measure carried by a segment or something like that; it could fail to be absolutely continuous with respect to the Lebesgue measure, and then you would not be able to write a density like this. This can happen, but I rule out those situations; I am trying to stay in the good situation. Now, in fact there are cases where you can explicitly compute the equilibrium measure, and this is enlightening. The simplest case is V quadratic; in fact V radial would already be good, because if V is radial you can expect by symmetry that the equilibrium measure is also radial, and then you can more or less compute. But let us put ourselves in a situation that is even easier: if you take V(x)
equal to |x|², in a Coulomb case, then since the Laplacian of a quadratic function is a constant, μ_V will be a constant, namely Δ(V/2)/c_d, times the characteristic function of its support, and by symmetry the support has to be a disc (a ball). You then just have to compute the radius that makes this a probability measure. If you remember, the Ginibre ensemble corresponds to V quadratic in dimension two, and here you find a multiple of the characteristic function of a disc: that corresponds to what is called the circular law. For now I have said nothing about the situation with temperature, I am only looking at minimizers, but in a minute I will explain that this is somehow still true with temperature. Another example, fundamental especially for today: take V quadratic in the one-dimensional logarithmic case. Then I cannot use the Laplacian formalism, because I am not in the Coulomb setting, but I can tell you that μ_V can be computed, and it is identified to be the semicircle law, with density proportional to √(4 − x²), with the proper constant in front (the exact normalization of V depends on the conventions). So that is the semicircle law that Ioana Dumitriu was talking about this morning, and it fits with what I was describing at the beginning, because the GOE and GUE ensembles correspond to this type of potential. So now we know what minimizers look like: the result is that the empirical measure converges to μ_V, that is, for minimizers, (1/n) Σ_i δ_{x_i} converges to μ_V. So, always thinking of the quadratic case as the model (it is a good model), say with my circular law, I expect to have particles fairly well distributed, in such a way that their distribution becomes uniform in the disk; there
might be a few points outside, but not too many; it is going to look like that. So this is the first good piece of information on the behavior. Now there is a very nice theorem which tells you that this is not only true for minimizers: it is in fact also true for configurations with temperature, and this is phrased in terms of a large deviation principle for the Gibbs measure that I wrote before. By the way, I did not say what Z_{n,β} is: it is just the constant that normalizes this to a probability measure; you want the density to integrate to one, which means you have to divide by the appropriate constant. That constant is actually not so well understood, and the study of that constant is of interest in its own right. So: under this probability measure, the empirical measure admits an LDP at speed n², with rate function (β/2)(I_V − I_V(μ_V)); of course min I_V = I_V(μ_V), as we have seen. What does it mean to have a large deviation principle? If you want the formal definition, it is written in the notes, but an informal definition is that the probability that the empirical measure lies in a certain subset A of the space of probability measures behaves roughly like exp(−(β/2) n² inf_A (I_V − I_V(μ_V))). (There is a discussion here of closure and interior, but this is the rough statement.) Now look at the exponent: I_V − I_V(μ_V) is always nonnegative, so this is the exponential of a nonpositive quantity, hence always at most one, which is good, since it is a probability. And you will see that this exponential actually decreases very fast as n → ∞: the probability tends to zero exponentially fast as soon as the infimum is positive, that is, except if the infimum happens to be zero, which means except if the infimum of I_V over A is equal to the minimum of I_V.
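Since both the equilibrium measure and its Euler-Lagrange characterization are now in hand, here is a small numerical sanity check of that characterization, under one normalization convention which is my own assumption (V(x) = x²/2 in the 1d log case, for which μ_V is the semicircle law with density √(4 − y²)/(2π) on [−2, 2] and the Euler-Lagrange constant works out to c = 1/2): the quantity h^μ(x) + V(x)/2 should come out the same at every point inside the support.

```python
import numpy as np

# Midpoint-rule quadrature for h^mu(x) = integral of -log|x - y| dmu(y),
# with mu the semicircle law on [-2, 2].
N = 200_000
edges = np.linspace(-2.0, 2.0, N + 1)
y = 0.5 * (edges[1:] + edges[:-1])            # midpoints avoid the endpoints
w = 4.0 / N                                    # midpoint-rule weight
rho = np.sqrt(4.0 - y**2) / (2.0 * np.pi)      # semicircle density

def h_mu(x):
    # the log singularity at y = x is integrable, so the midpoint rule is fine
    return np.sum(-np.log(np.abs(x - y)) * rho) * w

# h^mu + V/2 should be (approximately) the same constant at interior points
for x in (0.0, 1.0, 1.5):
    print(x, h_mu(x) + (x**2 / 2.0) / 2.0)     # all close to 1/2
```

The three printed values agreeing is exactly the "constant on the support" statement; outside the support the same quantity would only satisfy the inequality.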
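One can also see the n² normalization of the energy concretely: if you sample the points i.i.d. from μ_V itself, then H_n/n² should be close to I_V(μ_V). A rough Monte Carlo sketch, in the same assumed 1d convention as above (V(x) = x²/2 and μ_V the semicircle law, for which my computed value is I_V(μ_V) = 3/4; the value depends on these conventions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# the semicircle law on [-2, 2] is a rescaled Beta(3/2, 3/2) distribution
x = 4.0 * rng.beta(1.5, 1.5, size=n) - 2.0

dist = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(dist, 1.0)           # mask the diagonal: g(0) is infinite
pair = -np.sum(np.log(dist))          # sum over i != j of -log|x_i - x_j|
H = pair + n * np.sum(x**2 / 2.0)
print(H / n**2)                       # close to I_V(mu_V) = 3/4
```

Masking the diagonal with ones (so those entries contribute −log 1 = 0) is exactly the removal of the i = j terms discussed earlier.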
So it means the probability tends to zero except if the set A contains μ_V, because μ_V is the unique minimizer of I_V. Now imagine that I take A to be a smaller and smaller neighborhood of the limit of the empirical measures: if (1/n) Σ_i δ_{x_i} converges to some μ, and μ is different from μ_V, this is essentially telling me that the probability of such events goes to zero. In words, the only configurations that are observed with non-vanishing probability are those whose empirical measure converges to μ_V. Another way of saying this: the empirical measure converges to μ_V except with very small probability. So you see, it means that typical configurations, even with temperature, will do the same thing as the minimizers: they will look like μ_V. [Question.] Yes, I_V is the functional I defined before, the one I erased; μ ranges over probability measures, and the infimum in the exponent is over μ in A of I_V(μ). Is that cleaner?
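To see this concentration at positive temperature in the model case, one can exploit the exact correspondence from the beginning of the lecture: the eigenvalues of a complex Ginibre matrix are a 2d log gas at β = 2 with quadratic V, and after the usual 1/√n rescaling (the standard circular-law scaling, which is my assumption here, not something fixed in the lecture) their empirical measure should look like the equilibrium measure, the uniform law on the unit disk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# complex Ginibre: iid complex Gaussian entries, no symmetry constraint
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
lam = np.linalg.eigvals(A) / np.sqrt(n)   # rescaled eigenvalues

r = np.abs(lam)
print(np.mean(r <= 1.05))   # nearly all eigenvalues lie in the unit disk
print(np.mean(r <= 0.5))    # about 0.25: the uniform law gives the disk of radius r mass r^2
```

The second printed fraction being close to r² = 0.25 is the uniform (constant-density) profile predicted by the formula μ_V = Δ(V/2)/c_d for quadratic V, now observed at positive temperature rather than for minimizers.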