Welcome back. Today I'll try an experiment: half blackboard, half slides, to make it a bit more lively, I hope. This morning we gave some general background on extreme value statistics and large deviations. Now we will capitalize on this to discuss the properties of the largest eigenvalue of Gaussian matrices. So what do you know about Gaussian matrices? Yes — they are characterized by a certain index called beta, with beta equal to 1, 2, or 4. And what do we know about the density of eigenvalues? The average density of eigenvalues of Gaussian random matrices in the large-n limit follows the semicircle law. Do you remember how the edge points of the semicircle scale? Root 2n — and then you have something else: a beta in there. So the edges of the semicircle scale as the root of 2 beta n, which means that if you rescale all the eigenvalues by the root of beta n — you produce your random matrices, collect the eigenvalues, and divide each eigenvalue by the square root of beta n — the semicircle will run between minus root 2 and root 2. Do we agree? After this rescaling the semicircle no longer scales with n: whether you take n equal to 50 or n equal to 100, all the histograms collapse onto the same universal curve. Good. Now, after this rescaling, where do you expect the largest eigenvalue to sit, typically? Root 2. Why? Because this is the edge of the semicircle, so it is in some sense to be expected that the largest eigenvalue sits around this position. Actually we can prove this, but we will just assume that it is true.
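The rescaling just described is easy to check numerically. Below is a minimal sketch (assuming NumPy; the construction H = (A + Aᵀ)/2 with A a standard Gaussian matrix is one GOE convention that puts the edges at ±√(2n) before rescaling):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 400, 1  # GOE corresponds to beta = 1

# Symmetrize a standard Gaussian matrix; this normalization puts the
# semicircle edges at +/- sqrt(2 n) before any rescaling.
a = rng.standard_normal((n, n))
m = (a + a.T) / 2

# Divide every eigenvalue by sqrt(beta * n): the spectrum now fills
# [-sqrt(2), sqrt(2)] regardless of n.
lam = np.linalg.eigvalsh(m) / np.sqrt(beta * n)

print(lam.min(), lam.max())  # both close to -/+ sqrt(2) ~ 1.414
```

Repeating this with n = 50 or n = 100 gives histograms collapsing onto the same semicircle, as stated above.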
So the average of the largest eigenvalue of Gaussian matrices, once this rescaling has taken place, will be at root 2. Now, what about fluctuations around this value — the standard deviation, for example? What I mean is that you can compute the average value of the largest eigenvalue at finite n — for a matrix of size 3 by 3, 5 by 5, 50 by 50 — and then take the limit n to infinity. For any finite n you have a non-zero probability that the largest eigenvalue is bigger than root 2, that's for sure; but in the limit n to infinity this is the correct statement. Good. So what about the fluctuations of the largest eigenvalue — for example, its full distribution? This was a very important and difficult problem for a long time. Now I give you one result, and then we will try to understand why it is so important. The official date of this result is 1994, although by 1992 a good deal was already known about the problem. We write lambda max equal to root 2 — its average value — plus a correction of order n to the minus 2/3, the same 2/3 that we have seen in the context of May's model, times chi of beta. So lambda max is a random variable, and we write it as its average, or typical, value plus a fluctuation scale times another random variable, chi of beta, where beta can be 1, 2 or 4. Then the result of 1994 is that the limit as n goes to infinity of the probability that chi beta is smaller than or equal to s — the cumulative distribution of this scaled random variable — exists once this scaling has been performed, and it is equal to f1 of s, f2 of s or f4 of s, where these are non-trivial functions. Now I will give you the expressions of these functions.
These functions are called Tracy-Widom distributions; there are three of them, depending on beta. This result was proven by these two gentlemen here, Craig Tracy and Harold Widom, between 1992 and 1994 — the most complete version of their work appeared in 1994. They managed to compute the limiting distribution of the largest eigenvalue of Gaussian matrices in this scaling limit. To make contact with the case of IID random variables: here root 2 and n to the minus 2/3 over root 2 are the analogs of the scaling constants an and bn for IID random variables. You take your random variable, subtract the mean value, divide by the scale of the typical fluctuations, and what remains in the limit n to infinity has a non-trivial, n-independent distribution, given by these functions. The only problem is that these functions are much more complicated — even to write down, even to plot — than the corresponding extreme value distributions for IID random variables. If you compare the complexity of these objects with the Gumbel, Fréchet or Weibull laws, these are incredibly more difficult. I will give you one example. Take f2 of s, the Tracy-Widom distribution for the Gaussian Unitary Ensemble, beta equal to 2. It has this expression: the exponential of minus the integral from s to infinity of (x minus s) times q squared of x, dx, where q of x satisfies a certain differential equation — q''(x) equal to 2 q cubed of x plus x times q of x — with appropriate boundary conditions. So you see, this type of object is very complicated even to write down, let alone to plot. Suppose you want to plot this distribution — what do you have to do?
You have to solve a certain differential equation, pick out the particular solution satisfying a specific boundary condition, plug that solution into an integral, perform the integral, and then take the exponential. Once you have done all this, you can plot your function. Now, in the handout I reproduced some code, available in a published paper, that performs this operation numerically; on page 25 you get the result of this small code. What is plotted there is not f2 of s itself but its derivative — the corresponding PDF. The f2 here is clearly a cumulative distribution; you can differentiate it and you get the three curves on page 25. Good. This equation has a name: it is a nonlinear second-order differential equation that goes under the name of the Painlevé II equation. I will not go too much into the details; what enters here is a specific solution of the Painlevé II equation satisfying certain boundary conditions. This result was a real tour de force in mathematical physics — the proof spans many pages and is not a trivial result at all. Now we can show something here. There are also the other functions, f1 and f4; I will make this presentation available to you so you have all the information. At the top we have f1, f2, and f4 in this form. These are cumulative distributions: they start from zero and saturate at one, even if that is absolutely not obvious from the expression — it so happens that this function goes to one on the right and to zero on the left. And below are the derivatives of the three, the corresponding PDFs.
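To make the recipe above concrete, here is a small sketch of the numerical procedure in Python rather than the Matlab of the handout (assuming SciPy; the standard trick in numerical work on this problem is to integrate the Painlevé II equation backwards from a large x, where the relevant Hastings-McLeod solution is well approximated by the Airy function Ai):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.special import airy

# Painleve II: q''(x) = 2 q(x)^3 + x q(x), with the boundary condition
# q(x) ~ Ai(x) as x -> +infinity (the Hastings-McLeod solution).
x0 = 6.0                      # start far to the right, where q ~ Ai
ai, aip, _, _ = airy(x0)      # Ai(x0) and Ai'(x0) as initial data

def painleve2(x, y):
    q, qp = y
    return [qp, 2.0 * q**3 + x * q]

# Integrate backwards from x0 down to -8 with tight tolerances.
sol = solve_ivp(painleve2, [x0, -8.0], [ai, aip],
                dense_output=True, rtol=1e-10, atol=1e-12)

def f2(s):
    """Tracy-Widom CDF F2(s) = exp(-int_s^inf (x - s) q(x)^2 dx).

    The tail of the integral beyond x0 is negligible, since q ~ Ai
    decays superexponentially there.
    """
    val, _ = quad(lambda x: (x - s) * sol.sol(x)[0] ** 2, s, x0, limit=200)
    return np.exp(-val)

print(f2(-4.0), f2(-2.0), f2(0.0))  # increasing from near 0 towards 1
```

Differentiating f2 numerically then reproduces the PDF curves shown on page 25.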
So in a numerical paper — I have probably linked it in the handout; if not, I can give you the exact reference — they put this result to the test. They wrote a very efficient routine to compute the histogram of the largest eigenvalue of Gaussian matrices: you compute the largest eigenvalue, scale out root 2, divide by the fluctuation scale, and plot a normalized histogram of the result. And you can see that the normalized and scaled largest eigenvalue for beta equal to 1, 2, and 4 precisely matches the solid lines, which are the Tracy-Widom PDFs. So this result is true, and you can test it numerically yourself, even this afternoon, just using Matlab — this plot was done in Matlab. As you can see, these PDFs are interesting because they don't look like Gaussians at all: they have highly asymmetric tails and this very funny, weird shape described by that extremely complicated expression. That's life. Okay, so why does this matter? This result was very important in the random matrix community, but over time we started realizing that it is really much deeper than it seems, because these Tracy-Widom distributions keep appearing in so many completely unrelated problems. Although they were originally discovered in random matrix theory, they appear in several problems: combinatorial problems, stochastic growth models, problems in mesoscopic physics. There are several instances of completely unrelated problems where these complicated objects — which cannot appear by chance — crop up naturally. So what I want to do now is describe one such problem in combinatorics where the Tracy-Widom distribution appears and where, as you will see, there is absolutely no link whatsoever with random matrix theory.
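You can reproduce a rough version of that numerical test yourself. Here is a sketch for beta = 2 (assuming NumPy; with off-diagonal variance 1 the GUE edge sits near 2√n, and after rescaling by √(βn) the fluctuation variable is χ = √2 · n^(2/3) · (λ'max − √2)):

```python
import numpy as np

rng = np.random.default_rng(1)
n, samples = 100, 200
chi = []
for _ in range(samples):
    # GUE sample: complex Hermitian, off-diagonal variance 1, so the
    # spectral edge sits near 2*sqrt(n) = sqrt(2*beta*n) with beta = 2.
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    m = (a + a.conj().T) / 2
    lmax = np.linalg.eigvalsh(m)[-1] / np.sqrt(2 * n)  # rescale by sqrt(beta*n)
    # Scaled fluctuation around the edge at sqrt(2).
    chi.append(np.sqrt(2) * n ** (2 / 3) * (lmax - np.sqrt(2)))
chi = np.array(chi)

# In the large-n limit the TW2 distribution has mean ~ -1.77 and std ~ 0.90;
# at moderate n the sample moments should already sit in that neighborhood.
print(chi.mean(), chi.std())
```

A normalized histogram of `chi` is what gets compared against the Tracy-Widom PDF in the figure.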
Okay, so the combinatorial problem I want to discuss was solved, after many, many years, by Baik, Deift, and Johansson in 1999: the distribution of the length of the longest increasing subsequence of a random permutation. The title sounds obscure, but the problem is very simple — a very simple combinatorial problem — so let me explain what it is. You have a sequence of integers, in this case seven integers, with a preferred direction from left to right; you read the sequence as you normally would: five, two, eight, three, four, ten, nine. Out of this sequence you can isolate increasing subsequences: subsequences, read from left to right, whose entries increase. For example, three, four, nine is an increasing subsequence; three, four, ten is an increasing subsequence; two, eight is an increasing subsequence. A super simple definition. Then you can isolate the longest increasing subsequence: here two, three, four, nine is a longest increasing subsequence. It has length four, and you cannot find an increasing subsequence of length five or higher. You may find two, three, or more longest increasing subsequences, but the length of the longest increasing subsequence is clearly fixed, because it is the longest — in this case, four. Good. Now, given a sequence of integers, there is an algorithm called patience sorting, inspired by the solitaire card game, which finds the length of the longest increasing subsequence in the most efficient way. The algorithm works as follows. You take your sequence above and you start from the left: you take the first card, five, and you put it down.
Then you consider the second card, two. Two has a smaller value than five, so you can put it on top of the previous pile: you get five with two on top. Now you get eight. Eight is larger than two, so you need to form another pile next to it, and when you do, you draw a pointer — an arrow — from this new card to the top card of the previous pile. Does it make sense? Now you've got three. Where would you put three? On top of four — sorry, on top of eight — and again you remember to draw an arrow from this card to the top card of the previous pile. Why? That's how the algorithm works; you will see at the end, when I do everything. Then four: four needs another pile. Observe that at any stage of the algorithm the top cards of the piles increase from left to right — two, three, four. This is obvious, right? Then you do ten, with a pointer to the top card of the previous pile; then nine, again with a pointer. Now, look how many piles you have formed: four. And four is the length of the longest increasing subsequence. Not only that: you can reconstruct one longest increasing subsequence by following the arrows — for example two, three, four, nine. You see? So this algorithm automatically gives you the length of the longest increasing subsequence and one example of an actual longest increasing subsequence — no, not all of them — but given the simplicity of the algorithm, we shouldn't complain; we can be happy. Is the problem clear? Good. Now, so far we don't have any randomness, except that the original sequence is not a specific sequence — I just made it up.
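The bookkeeping above becomes very compact if you keep only the top card of each pile — the arrows are needed only to reconstruct the subsequence itself, not its length. A sketch in Python:

```python
import bisect

def lis_length(seq):
    """Length of the longest (strictly) increasing subsequence via patience sorting.

    We store only the top card of each pile.  Tops always increase from
    left to right, so each new card lands on the leftmost pile whose top
    is >= the card (found by binary search), or starts a new pile on the
    right.  The final number of piles is the answer.
    """
    tops = []
    for card in seq:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)   # new pile to the right
        else:
            tops[i] = card      # card goes on top of pile i
    return len(tops)

print(lis_length([5, 2, 8, 3, 4, 10, 9]))  # 4, as in the worked example
```

With the binary search this runs in O(n log n) per sequence, which matters for the simulations discussed later.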
So how can we make this problem more quantitative and introduce randomness? You take the ordered sequence of the first n integers — one, two, three, four, five, six, seven — and you consider all the permutations of this sequence. For example, five, seven, four, one, six, three, two is one permutation; four, six, five, one, three, two, seven is another. You consider all the n factorial permutations, and now imagine that all the permutations have equal probability. So you have an ensemble of permutations occurring with equal probability, and for each of them you compute the length of the longest increasing subsequence: you run the patience sorting algorithm on each of the n factorial permutations. Basically, you get one number — the length of the longest increasing subsequence — per permutation: n factorial numbers out of this algorithm. And now you can ask: how are these numbers distributed? How often do you get a longest increasing subsequence of length one, two, three, up to n? If you put a uniform distribution on the permutations, this is a well-posed probabilistic question. So clearly here we do have randomness, and we don't have any reference to random matrices. In the examples I have highlighted the longest increasing subsequences. In the first case we have five, seven — that's it, so the length is two. In the second case the longest increasing subsequence has length three: for example four, five, seven. So we observe lengths two and three.
Another longest one there would be four, six, seven. So: how are these numbers distributed, if all the permutations are equally likely? Well, in this table, for sequences of length n equal to 15, I list in this column the number of permutations whose longest increasing subsequence has length one, length two, length three, up to 15. So what is the sum of all the numbers in this column? Yes — 15 factorial. Here we have exactly one permutation whose longest increasing subsequence has length one. Which permutation is it? Yes — the reversed one: 15, 14, 13, and so on; then the length of the longest increasing subsequence is one. The same thing here: we have only one permutation whose longest increasing subsequence has length 15, which is the identity permutation — one, two, three, four, five, up to 15. But these other numbers are non-trivial. For example, we have 196 permutations whose longest increasing subsequence has length 14 — roughly speaking, you have swapped two numbers. For the others it becomes a bit more intricate, right? But look at this: if you divide these numbers by 15 factorial, you get the probability of having a longest increasing subsequence of a given length — seven, eight, and so on. All you have to do is divide each number by 15 factorial. Now, you see this shape here, the shape of the numbers? It's like a sail. Characterizing this distribution has a long history. The expected length for a sequence of size n, divided by root n, converges to a constant — so the expectation grows as the square root of n. This was conjectured by Ulam in 1961, and since then there have been a lot of different results.
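For a small n you can enumerate all n! permutations and rebuild a table like the one shown, just for n = 5 instead of 15. A sketch (note that the count for length n − 1 comes out as (n − 1)², consistent with the 196 = 14² entry in the n = 15 table):

```python
import bisect
from collections import Counter
from itertools import permutations

def lis_length(seq):
    # Patience sorting, keeping only the pile tops.
    tops = []
    for card in seq:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)
        else:
            tops[i] = card
    return len(tops)

n = 5
# One LIS length per permutation; tally how often each length occurs.
counts = Counter(lis_length(p) for p in permutations(range(1, n + 1)))
print(dict(sorted(counts.items())))
# {1: 1, 2: 41, 3: 61, 4: 16, 5: 1} -- and 1 + 41 + 61 + 16 + 1 = 5! = 120
```

Dividing each count by 5! gives the probability of each length, exactly as described for the n = 15 table.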
But the characterization of the full distribution was not known until the 1999 work I showed you. There were extensive numerical simulations by Odlyzko and Rains starting from 1993, but then the final result came out. This is a theorem — in a very complicated paper — and it reads as follows. Take the length of the longest increasing subsequence of a random permutation of size n, which is a random variable; subtract 2 root n; and divide by n to the power 1/6. This new random variable, chi n, has a cumulative distribution function which, in the limit n to infinity, converges to f2 of s — the Tracy-Widom distribution corresponding to the GUE. So we have a completely unexpected link with a result that first appeared in random matrix theory. And look, history could easily have been reversed. We could have discovered this result first, and the distribution of the largest eigenvalue later; then the function would have had another name — it would have been the Baik, Deift, and Johansson distribution. Too bad for Tracy and Widom. They could have discovered it first, and then we would have discovered later that the same function governs the distribution of the largest eigenvalue of Gaussian random matrices. A connection that is completely non-obvious — there are no random matrices in there — and yet these two objects are described by the same distribution. Okay? If you are not convinced, here are numerical simulations. You can produce large random permutations and do exactly the same operation: take the length of the longest increasing subsequence of each permutation, subtract 2 root n, divide by n to the power 1/6, and compute the histogram. And you see that the points fall nicely on top of the Tracy-Widom curve, as the top eigenvalue of the GUE would.
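A quick Monte Carlo version of that check can be sketched as follows (assuming NumPy; convergence to the limiting law is slow in n, so at moderate n the sample mean only sits in the neighborhood of the TW2 mean of about −1.77):

```python
import bisect
import numpy as np

def lis_length(seq):
    # Patience sorting: the number of piles is the LIS length.
    tops = []
    for card in seq:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)
        else:
            tops[i] = card
    return len(tops)

rng = np.random.default_rng(2)
n, samples = 1000, 300

# Scaled variable chi_n = (L_n - 2 sqrt(n)) / n^(1/6) for uniform permutations.
chi = np.array([(lis_length(rng.permutation(n)) - 2 * np.sqrt(n)) / n ** (1 / 6)
                for _ in range(samples)])

print(chi.mean(), chi.std())
```

Histogramming `chi` and overlaying the f2 density reproduces the comparison plot described above.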
But these are taken from the problem of permutations and longest increasing subsequences with uniform probability. Okay? You can also just read the quote by Shakespeare — I like it. Okay. So this is not the only example where the Tracy-Widom distribution has appeared in a completely unexpected way. I will not prove any of these results: the proofs are just too complicated, very technical, and not even particularly illuminating. What I will do instead is introduce another technique, which is very useful in random matrix theory, to discuss the problem of large deviations for the largest eigenvalue of Gaussian matrices. Now, let me give you some background. So: large deviations of the largest eigenvalue. Question: what is the joint probability density of the eigenvalues of Gaussian matrices — GOE, GUE or GSE? We've seen it over and over again, right? Can someone help me? We'll ask you one by one. Remember, we have a Gaussian piece, as if the eigenvalues were independent Gaussian variables, but then there is another, geometric factor which correlates them all. So apart from a normalization constant — let's call it B n beta — we have the exponential of minus one half times a summation, and then? You remember? Okay. So you have the Gaussian weight, which is model specific — it would be something different for another random matrix model — and then a geometric factor, the Jacobian of the change of variables from matrix entries to eigenvalues and eigenvectors, which is the Vandermonde determinant raised to the power beta. Now, remember that we decided from the beginning to rescale the eigenvalues by root of beta n. If we do that, we get a semicircle whose edges don't grow with n, so it is convenient to do this rescaling.
So in the end we will have a semicircle between minus root 2 and root 2, okay? If we do this rescaling at the level of the joint distribution — call the rescaled eigenvalues lambda i tilde if you want — the only change is that you get a factor beta n in front. With this correction the joint PDF describes the same Gaussian ensemble, just with the semicircle between minus root 2 and root 2. Try to convince yourself: the only thing I am doing is taking a factor root of beta n out of each eigenvalue. Good. But then we can rewrite this object in the form: exponential of minus beta times [ n over 2 times the sum from i equal 1 to n of lambda i squared, minus one half times the sum over i different from j of the log of the absolute value of lambda i minus lambda j ]. So I put this Vandermonde term in the exponential as well, as I always do. Is it clear why I put the factor one half in front of the log term? In the original product we have j smaller than k, while here I am summing over all pairs i different from j, so I am double counting each pair — lambda 1 minus lambda 2 and lambda 2 minus lambda 1 — and this double counting is compensated by the factor one half. Okay? Good. Do you remember why I did this same operation yesterday, or two days ago? Because this object has the very interesting form: exponential of minus beta times a certain function H of the eigenvalues. And what does this form remind us of? Exactly. Good. So we can interpret the gas of eigenvalues as a proper thermodynamic system in canonical equilibrium at inverse temperature beta. Good. Now, if we want to compute the distribution of the largest eigenvalue, I need your help. I give you the joint PDF of the eigenvalues of a random matrix model, p of lambda 1 up to lambda n, in this form, and what I want to compute is the probability that lambda max is smaller than or equal to x.
So this is the task. The joint PDF encodes all the properties of the eigenvalues, so in some sense this object must be computable starting from that one. The question is how to set up the calculation: what do we do to this function to obtain that object? Anyone wants to try? Yes — you need to integrate, like this. Who is in agreement? Everybody? No. So whoever disagrees needs to tell me the right way to proceed. So instead of integrating from x to infinity, you propose another option, from zero to x. Okay — but it should really be from minus infinity, because the eigenvalues can also be negative. We want all the eigenvalues to be between minus infinity and x: if all the eigenvalues are between minus infinity and x, then the largest eigenvalue is smaller than x. Agreed? Now I want to ask you some more questions. Here we have the joint PDF of the eigenvalues written in this Gibbs-Boltzmann form — exponential of minus beta H — and we are integrating the positions of our particles, which are described by this exponential, over a certain region of space, between minus infinity and x. In stat mech, what is this object? How would you call it — the Gibbs-Boltzmann measure integrated over the positions of the particles up to a certain value x? Some sort of weird partition function. Excellent. Yes, this is exactly a partition function: we integrate over the positions of the particles, but we restrict the integration to a certain region of space. So we can call this object Z n of x: the partition function of a gas of charged particles, subject to a two-dimensional Coulomb interaction but confined to a line, and such that no particle exceeds x. So this is now a completely classical, albeit somewhat complicated, problem of classical physics.
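The pieces introduced so far can be summarized in the notation of the lecture (with the normalization constant absorbed into Z n of infinity):

```latex
\mathcal{H}(\lambda_1,\dots,\lambda_n)
  = \frac{n}{2}\sum_{i=1}^{n}\lambda_i^{2}
  - \frac{1}{2}\sum_{i\neq j}\ln|\lambda_i-\lambda_j|,
\qquad
P(\lambda_1,\dots,\lambda_n)
  = \frac{e^{-\beta\,\mathcal{H}(\lambda_1,\dots,\lambda_n)}}{Z_n(\infty)},

\mathbb{P}(\lambda_{\max}\le x) = \frac{Z_n(x)}{Z_n(\infty)},
\qquad
Z_n(x) = \int_{-\infty}^{x}\!\cdots\!\int_{-\infty}^{x}
  e^{-\beta\,\mathcal{H}(\lambda_1,\dots,\lambda_n)}\,
  d\lambda_1\cdots d\lambda_n .
```

Releasing the barrier, x to infinity, turns Z n of x back into the ordinary normalization, so the ratio on the right is automatically between zero and one.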
You have a gas of particles with an extra constraint: they cannot lie to the right of a barrier at x. If you release the barrier — if you send x to infinity — you recover the standard canonical partition function, where you let your particles occupy the full real axis. You see? So you have a thermodynamics problem for a constrained gas: you put a barrier at a certain position and you start squeezing your gas. If you push the barrier inward, your gas will be forced to accumulate behind it: normally, at equilibrium, the particles would occupy a certain semicircular distribution, but you are pushing this gas into a much narrower region than it would normally occupy. Okay? So now, what does stat mech tell us to do to find the equilibrium configuration of this gas — the most likely configuration of the particles? Go ahead. Minimize the free energy. Good. So we have this gas of particles and we want to know the most likely configuration of the eigenvalues, given that they must lie to the left of this barrier. We need to find the configuration of particles that minimizes the free energy, which is essentially minus the logarithm of this constrained partition function. Do we all agree? Okay. So you see that we have turned our probabilistic problem — finding the distribution of the largest eigenvalue — into a problem of statistical mechanics with long-range interactions, in the presence of an impenetrable wall at position x. All we have to do is try to solve this problem. Good. So let's take a break, and then I will try to sketch how we can approach it. Okay.
Just to clarify what I'm doing — I got a couple of questions. Here we have our semicircle with an edge at root 2. The largest eigenvalue has an average around root 2 and fluctuates, in the large-n limit, according to a Tracy-Widom distribution, over a typical scale of n to the minus 2/3. This is the analog of the central limit theorem: a typical scale of the order of the standard deviation, as we saw yesterday. Now, to obtain this scaling function — the Tracy-Widom distribution — you have to do a lot of work. What I am suggesting instead is to devise a way to treat this problem using a stat mech analogy, which goes under the name — not surprisingly — of the Coulomb gas method. This is one of the standard tricks in random matrix theory: you map your statistical problem about eigenvalues onto the partition function of a system of charged particles with some unusual constraints — in this case, an impenetrable wall at location x. This mapping is very efficient, but it has a drawback: it is not nearly precise enough to nail down the distribution over the scale of n to the minus 2/3. This method is not powerful enough for that. What you can get from it — and I will try to do this calculation between today and maybe tomorrow — are the large deviation tails to the left and to the right of this distribution. For example, using this method you can answer the question: what is the probability that all the eigenvalues of a Gaussian matrix are negative? This is clearly a very unlikely event, which becomes exponentially less likely as you increase the size of your matrix.
And for this type of question you are probing the far tails of the distribution, and those you can access using the Coulomb gas method. Okay. So it's a trade-off. If you want more specific and more precise information about what happens around the mean, you need to invest a lot of time learning the tricks that led Tracy and Widom to find their functions. If you are happy with some information about the large deviation tails, you can use a faster but less precise method: the Coulomb gas method, which is the one I started to describe. Is this clear? The starting point is always the same — the probability that lambda max is smaller than or equal to x — but the methods to investigate the large-n limit are different, and much less sophisticated in this case. Good. So, the Coulomb gas method. You will need to learn it because it is very interesting: it is related to the physics of the eigenvalues as a gas of charged particles. The goal is to estimate, for large n, this constrained partition function. I now call the barrier position w instead of x — it is the same object as before; I renamed it simply to match the convention in my notes. I rewrite it slightly: the exponential of minus beta n squared times [ one over 2n times the sum from i equal 1 to n of lambda i squared, minus one over 2 n squared times the sum over i different from j of the log of the absolute value of lambda i minus lambda j ]. The only thing I did is pull a factor n squared out in front and compensate with one over n and one over n squared inside. It is the same thing.
So this will be our partition function, and we want to know how the free energy of this constrained gas behaves in the limit n to infinity. In order to do that, we will need to estimate the exponential growth of the partition function rather precisely: what happens to this object, or to its logarithm, as n goes to infinity? This is our goal. Okay. The Coulomb gas method proceeds through a series of dirty steps that mathematicians don't like — that's why I'm doing it. The first dirty step, to carve out the large-n behavior of this object, is to introduce a counting function. Let me explain what I mean by that; I will proceed formally and then we will try to see what this object means. Call it n of x: one over n times the sum from i equal 1 to n of delta of x minus lambda i. So the counting function is, at this level, the non-averaged version of the spectral density: a sum of spikes at the location of each eigenvalue. Sorry? Yes — apart from a trivial rescaling. Are you a mathematician? I didn't expect to be stopped at this early stage. Okay. Now, why is this object useful? Because of a couple of dirty tricks, which constitute the second step of the process. Instead of summing, or integrating, over lambda 1 up to lambda n — which is the operation we need to perform — we do something else. Let us call lambda 1 up to lambda n a microstate, to make contact with the language we use in stat mech: a microstate is a configuration of particles arranged in a given order on the real line. These are the microstates of our fluid.
So instead of summing or integrating over lambda 1 up to lambda n, which is the thing that we should do, we do something else. We first fix a certain profile, a certain function n of x; forget the definition, just fix a certain function n of x. This n of x will be non-negative, smooth, and normalized. And then we sum over all microstates which are compatible with the n of x that we fixed. So this is a long series of words, but the concept is actually quite simple. We have here a certain microstate, but we might have other microstates, other arrangements of our particles, that viewed from a macroscopic point of view would give rise to the same density profile. This is the same thing that we do when we study gases in standard statistical mechanics: different arrangements of the particles in a gas can give rise to the same macroscopic properties, the same volume, the same pressure, and so on. So all we are trying to say here is that instead of summing over the microstates, over all the microscopic configurations of the eigenvalues, we first fix a certain macroscopic density profile, and then we integrate over all the microstates that contribute to the same macrostate. It is just another way to perform the same summation, but introducing a smooth density function, which is what we called the counting function. Okay? So it is the exact translation of what we always do in classical statistical mechanics, except in a slightly different setting. Good. In practice, how is this operation carried out? Well, by introducing another of the dirty tricks, which is a quite fancy representation of one in terms of a functional integral. Have you ever seen anything like that? You have, maybe not explicitly, but you have. Okay?
So here we are saying that if we integrate over all possible non-negative, smooth and normalized functions n of x, such that this n of x is exactly equivalent to its definition, then the integral over the whole space of functions defined in this way should reproduce one. Okay? Because this integral will just single out one function, the only one that corresponds to its definition. So this is the functional version of the standard identity for the delta function, right? Is that clear? Okay. So now we have a very interesting representation of one that we insert into the integral we had before. What we obtain, by exchanging the order of integration, is that we can rewrite our partition function as a functional integral over smooth functions n of x of what? Of the integral between minus infinity and w of d lambda 1 up to d lambda n of the exponential of everything else, times this delta. So for a given density profile n of x, I'm basically integrating over all microstates that are compatible with this density profile, because the density profile must be equal to this object. Okay? And then I'm integrating over all possible smooth, non-negative and normalized functions. So I'm doing the initial integrations in two steps: I'm first grouping all the microstates according to a given macrostate, and then I'm summing over all possible macrostates. But why is this trick useful? Because now we can use the property of this delta function to simplify this multiple integration. I'll show you how to do that. The fact that we have introduced this counting function allows us to use the following identities. The summation over i from 1 to n of f of lambda i, for any function f, can be written as n times the integral dx of f of x times n of x.
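Written out in symbols, the two tricks on the board so far are (my own transcription of the spoken formulas, using the lecture's notation of n both for the matrix size and for the counting function):

```latex
1 \;=\; \int \mathcal{D}[n]\;
\delta\!\left[\, n(x) \;-\; \frac{1}{n}\sum_{i=1}^{n}\delta(x-\lambda_i) \,\right],
\qquad
\sum_{i=1}^{n} f(\lambda_i) \;=\; n \int \mathrm{d}x\; n(x)\, f(x).
```

The first is the functional representation of one inserted into the partition function; the second is the identity that converts single sums into single integrals once the delta constraint is in force.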
So all you have to do is to replace here the definition of the counting function in terms of the sum over delta functions of x minus lambda i, and carry out the integration. Is it clear? This one over n will cancel this n. Then you pull the summation out, and the delta function will kill the integral, replacing every occurrence of x with lambda i. Okay? So now we can convert sums into integrals, and double sums into, well, guess what? Double integrals, right? So why are these identities important? Because the Hamiltonian for our particles contains exactly single sums and double sums. For example, in the Hamiltonian you find a term of this form, right? So what will this term be equal to? Just use the first formula. It will become n times the integral dx of n of x times x square, right? And in the Hamiltonian we had the one over two n in front, so this object is one over two n times n. Then for the logarithmic interaction term we have to do a bit more work, because you remember the Hamiltonian has a logarithmic interaction term of this form: the summation over i different from j of log of lambda i minus lambda j, right? This term is not exactly of the form above, because the term with i equal to j is excluded from the sum. The reason is that if i becomes equal to j, this contribution becomes infinite. Okay? And this infinite contribution has a very physical interpretation: if two charged particles are made to coincide, this is the contribution to the self-energy of an electron. Okay? So we need to find a way to take this infinite contribution to the energy away from the sum, to renormalize it. The way to do it is to rewrite this sum formally including the term with i equal to j, and then subtracting that same diagonal term. Okay?
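A quick numerical sanity check of the two conversions (my own illustration, not from the lecture): draw a GOE matrix, rescale, and compare the single and double sums against the corresponding semicircle integrals. The limiting values 1/4 and minus one eighth minus a quarter of log 2 follow from the known second moment and logarithmic energy of the semicircle on [-sqrt(2), sqrt(2)].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
a = rng.standard_normal((n, n))
lam = np.linalg.eigvalsh((a + a.T) / 2.0) / np.sqrt(n)

# Single sum -> single integral:
# (1/2n) sum_i lam_i^2  ->  (1/2) \int dx n(x) x^2 = 1/4 for the semicircle.
s1 = np.sum(lam**2) / (2 * n)

# Double sum -> double integral:
# (1/2n^2) sum_{i != j} log|lam_i - lam_j|
#   ->  (1/2) \int\int dx dx' n(x) n(x') log|x - x'| = -1/8 - (log 2)/4.
diff = np.abs(lam[:, None] - lam[None, :]) + np.eye(n)  # put 1 on the diagonal
s2 = np.sum(np.log(diff)) / (2 * n**2)                  # log(1) = 0 kills i = j

print("single-sum term:", s1, " (limit 0.25)")
print("double-sum term:", s2, " (limit", -0.125 - np.log(2) / 4, ")")
```

The small residual mismatch in the double sum is of order log(n)/n, exactly the kind of subleading correction that the regularization of the diagonal term produces.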
So formally we can write this as one over two n square times the summation over all i and j of log of lambda i minus lambda j, but then subtracting the diagonal term, regularized by a short-distance cutoff between two electrons, which renormalizes the self-energy when i is equal to j. The precise way this cutoff is defined would in principle be a problem, but it turns out to be irrelevant for this calculation: the correction term will be subleading anyway. So although its precise form would be important in general, for this specific calculation it is not, which is a good thing, and we can forget it for the moment. This object here will then become, using the second property, one over two n square times n square times the integral dx dx prime of n of x n of x prime log of x minus x prime, minus a correction term that we can neglect. So I'm just considering this first bit here and using the second property. Is that clear? Okay, excellent. So all we have to do is to take this expression and this expression and plug them inside the Hamiltonian here. So we are replacing sums with integrals, which is a very good thing, because a continuous action will be easier to deal with than a discrete one, and we will have to deal only marginally with a complicated integral that we can anyway solve. Okay, so can I erase here? Yeah, no? So let me summarize once we have done these tricks. Is everything all right? I sparked a huge debate. Is everything all right? No? Okay, let me write the summary and maybe it will become clear. So what will our restricted, or constrained, partition function be equal to?
Well, we had the functional integral over counting functions, and then what do we have? We have the exponential of the Hamiltonian, which is now no longer a function of lambda 1 up to lambda n; it is a functional of the counting function. So we can bring this object outside the integral over lambda 1 up to lambda n, right? And write it here. So this becomes the exponential of the quadratic part, which is minus beta n square times one half the integral dx of n of x times x square, and then we have the interaction part. Agreed? So this is just the Hamiltonian rewritten in terms of the continuous variables: the quadratic part and the interaction part. And now what is left? What is left is this delta function, because we need to make sure that the profiles we are integrating over are compatible with the microstates. So we need this part, and this part is still to be computed. So this is multiplied by the integral over d lambda 1 up to d lambda n of the delta. Now let's test a bit of physical intuition. What is this object here? We are giving a mathematical expression to a very important physical object, right? What are we doing here? We are counting how many microstates are compatible with a given macrostate, which is the definition of an entropy, right? So the logarithm of this object would be the entropy of your configuration of eigenvalues. We are giving a mathematical representation of the entropy of our gas: we are integrating over all possible microstates which are compatible with the given macrostate. Makes sense? Good. So the only thing we would have to do is to perform this integration. This can be done, at least in the large n limit. I will give this as an exercise, with a pointer to the place where you can find the solution.
But the good news is that this object, let's call it I n of n of x and w, scales for large n as the exponential of minus n times the integral dx of n of x log n of x. So you see, for large n we have here an expression that really looks like an entropy: we have n of x log n of x, integrated. But the important thing is that it scales as the exponential of minus n, while the energetic term of the action scales as the exponential of minus n square. So the good thing in this type of calculation is that this gas of particles is energetically dominated: the free energy is dominated by the energetic component, while the entropic component is subdominant, subleading. Can you give a physical intuition for why the free energy of this gas is dominated by the energetic component and not the entropic one? Yeah. This is a system that is long-range correlated. We have an interaction which is pairwise, but long-range. So the energetic contribution scales as n square, because n square is the order of the number of pairs of interacting particles. So we are in a regime that we normally don't consider in standard stat mech problems, where the internal energy and the entropy are assumed to scale in the same way with n, that is, to both be extensive. Here we have a completely different situation. The entropy, the logarithm of the number of compatible microstates, is extensive, but the energy is super-extensive: it goes as n square, not n. And this is a very good thing for us, because we can neglect this part in the large n limit. The entropy will not count; only this part will count. Excellent. So now all we have to do is to extract the leading order in n of this action. Sorry. All the correction terms are of order n or lower.
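The energy-versus-entropy scaling can be seen numerically. Below is a deterministic sketch (my own construction, not from the lecture): place n particles at the midpoint quantiles of the semicircle, then evaluate the raw log-gas energy, which should grow like n square, and a Riemann-sum estimate of the entropy term, which should grow like n.

```python
import numpy as np

def semicircle_cdf(x):
    # CDF of the semicircle density sqrt(2 - x^2)/pi on [-sqrt(2), sqrt(2)].
    x = np.clip(x, -np.sqrt(2), np.sqrt(2))
    return 0.5 + (x * np.sqrt(2 - x**2) / 2 + np.arcsin(x / np.sqrt(2))) / np.pi

def quantile_gas(n):
    # Deterministic particle positions: midpoint quantiles of the semicircle,
    # found by bisecting the CDF.
    qs = (np.arange(n) + 0.5) / n
    lo = np.full(n, -np.sqrt(2))
    hi = np.full(n, np.sqrt(2))
    for _ in range(60):
        mid = (lo + hi) / 2
        below = semicircle_cdf(mid) < qs
        lo[below] = mid[below]
        hi[~below] = mid[~below]
    return (lo + hi) / 2

def raw_energy(x):
    # H = (n/2) sum_i x_i^2 - sum_{i<j} log|x_i - x_j|: the log-gas Hamiltonian,
    # i.e. n^2 times the energy functional of the lecture.
    n = len(x)
    diff = np.abs(x[:, None] - x[None, :]) + np.eye(n)  # 1 on the diagonal
    return 0.5 * n * np.sum(x**2) - np.sum(np.triu(np.log(diff), k=1))

def raw_entropy(x):
    # -sum_i log rho(x_i): a Riemann-sum estimate of -n * int rho log rho.
    rho = np.sqrt(2 - x**2) / np.pi
    return -np.sum(np.log(rho))

e1, e2 = raw_energy(quantile_gas(100)), raw_energy(quantile_gas(200))
s1, s2 = raw_entropy(quantile_gas(100)), raw_entropy(quantile_gas(200))
print("energy ratio when n doubles (expect ~4):", e2 / e1)
print("entropy ratio when n doubles (expect ~2):", s2 / s1)
```

Doubling n multiplies the energy by about four and the entropy by about two, which is the whole point: in the exponent, the n square energy term wins and the order-n entropy can be dropped.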
For example, the self-energy correction term will be of order n in the exponential. Okay. So we have a lot of correction terms, and if the leading object were of order n, we would be in trouble, because then we would need to keep track of all of them. But the fact that the leading term is of order n square saves us, because it is the only term of order n square, so it wins in the large n limit. It is leading. Okay. So we turned a potential disaster into something that we could actually work with very well. So our constrained partition function is of the type: functional integral of the exponential of minus beta n square times some functional of n of x, plus corrections. And this functional will clearly depend on omega, or w. So what is this functional? It is just this guy here: one half the integral between minus infinity and w of dx n of x x square, minus one half the integral between minus infinity and w of dx dx prime n of x n of x prime log of x minus x prime. Now you tell me what we do with this. We have a standard functional integral with an action that scales with n square. So what is the most probable configuration n star of x, the one that gives the largest contribution to this integral? Yeah. So what is this approximation called? Saddle point, or Laplace approximation, right? So the largest contribution to this integral comes from the minimizer of this object here. What we have to compute is the counting function that minimizes the exponent. Do we agree? So if we compute this profile n star of x, physically, what is this object? n star of x, the solution to this minimization problem. So here we have x, our barrier. Sorry, the barrier is at omega, and n star of x is here. Okay?
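In formulas, the constrained action spoken here is (my own transcription into symbols):

```latex
Z(w) \;\approx\; \int \mathcal{D}[n]\; e^{-\beta n^2 E_w[n]},
\qquad
E_w[n] \;=\; \frac{1}{2}\int_{-\infty}^{w} \mathrm{d}x\; n(x)\, x^2
\;-\; \frac{1}{2}\int_{-\infty}^{w}\!\!\int_{-\infty}^{w} \mathrm{d}x\,\mathrm{d}x'\; n(x)\, n(x')\, \log|x-x'| .
```

The saddle point of this functional integral is the minimizer n star of x of E_w, over non-negative normalized densities supported to the left of the wall.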
So once we have solved this problem, physically, what is this n star of x? Yes. This n star of x will be the typical density of particles in the presence of a barrier at omega, right? Because it is the counting profile that minimizes the action. Okay? Is that sort of clear? So what do we expect it to be? You have a gas of particles that repel each other because they are charged. Okay? They are sitting here. Now you arrive with a wall at the position omega. As long as the wall is far away, your particles will not care; they will stay in the same positions as if the wall were not there. So what would be this standard configuration of particles without the wall? The semicircle, right? So if w is sent to infinity, we expect to recover the semicircle. The semicircle is where your particles sit comfortably, with nobody disturbing them. Okay? But now we have this wall that is approaching. Like snow shoveling, you know? You have the semicircle here, and at some point this wall touches the edge of the semicircle and starts pushing. So what will happen to the density profile? Physically. Just really think about it. Your particles don't like to stay close to each other, but there is this wall which is pushing. So what would they do to minimize their total energy? Any idea, any suggestion we can discuss? It's a very physical problem, a very physical image. Right? There are several possibilities. For example, the gas could move to a region far away from the wall. The particles could decide that the least energetic thing to do is to move along with the wall. Why is this probably not what happens? Sorry? Yeah. It is not that probable, but why? Yeah.
But why is it like this? I mean, fine, it might be true, but I don't see it. Why can't they just all move in this direction? Just a semicircle, but translated away from the wall. Sorry? Well, yeah, but even without the wall you have a semicircle; they arrange into a certain shape. Why can't it be a semicircle, just a bit further away? I'm just asking about your intuition. I can give you the solution, but why? Let's discuss it. Yeah? Yes, exactly. The particles are sitting in a harmonic well centered at zero, which means that if we move the semicircle away, the gas will increase its internal energy a lot, because it will no longer sit at the minimum of the well. The quadratic potential well is why we have a semicircle here in the first place. But if we start moving the semicircle over here, the internal energy of the gas increases a lot: having a semicircle here is very expensive energetically. Right? So this is for sure not what is going to happen. The configuration that minimizes the energy in the presence of a wall will most likely be something like this: the density will start to increase a lot around the wall, exactly as snow does when you push a pile of it with a shovel, right? The gas particles will accumulate around the wall, because they would like to overcome it and reproduce the semicircle. You are pushing here with a shovel, and the particles will try to stay as close as possible to the edge of the semicircle, which means they will need to accumulate close to the wall. So the density profile that we expect to find will be something that has a divergence at the location of the wall.
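This pile-up at the wall can be watched directly in a toy simulation. Here is a minimal Metropolis sketch of the pushed gas, assuming the log-gas Hamiltonian written above with beta = 2; the wall position, step size and sweep count are my own illustrative choices, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, w = 30, 2.0, 1.0          # wall at w = 1.0 < sqrt(2): a pushed gas
step, sweeps = 0.1, 2000

def local_energy(i, xi, x):
    # Terms of H = (n/2) sum x_j^2 - sum_{j<k} log|x_j - x_k| that involve
    # particle i sitting at trial position xi.
    others = np.delete(x, i)
    return 0.5 * n * xi**2 - np.sum(np.log(np.abs(xi - others)))

# Start spread out to the left of the wall.
x = np.linspace(-np.sqrt(2), w - 0.01, n)

for _ in range(sweeps):
    for i in range(n):
        xi = x[i] + step * rng.standard_normal()
        if xi >= w:
            continue                # hard wall: reject moves past it outright
        de = local_energy(i, xi, x) - local_energy(i, x[i], x)
        if de <= 0 or rng.random() < np.exp(-beta * de):
            x[i] = xi

print("rightmost particle:", x.max())              # hugs the wall from below
print("fraction within 0.2 of wall:", np.mean(x > w - 0.2))
```

With the wall inside the semicircle, the rightmost particle ends up pressed against it and the density near the wall is visibly enhanced, which is the numerical shadow of the divergence argued for on the board.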
That's physically obvious, right? Because the gas of particles would like to stay in its unperturbed semicircle configuration, but it can't, because there is a barrier right inside the semicircle, so the particles cannot be to the right of it. Their preferred solution is to accumulate here, agreed? Sort of. So now we need to find this density profile from the calculation. Okay? So let's just write the saddle point equation, and then we can finish. Let's perform the functional differentiation of this object with respect to n. Can you help me out with this? The best configuration of our gas will satisfy a certain equation, which is this one. Okay. So what happens if we functionally differentiate the first bit? What do we get? Sorry, just speak up. I functionally differentiate this first bit, with the x square. Then we have a problem. Well, this can't be right, right? Have you ever done a functional derivative before? Okay. Just give random answers. Yes. So I want to do this operation on the integral. The functional differentiation kills one integral and leaves all the rest. So the result of this first object is one half x square. Now let's do the same thing here, where we have a bit more difficulty. Okay. Let's try to apply the same principle that led us here. We know that the functional differentiation will kill one integral and then act on the integrand as a standard derivative. The only problem is that we have two functions of the same type in the integral, so the only thing that happens is that we get a factor of two. The result of this second object will be two times a single integral, because one integral has been killed by the functional differentiation.
And then we will have dy n of y, and then the log of what? Log of x minus y, right? Because we are taking the functional derivative with respect to n of x. So that's the result. And now this object should be equal to zero, and this gives an equation for our equilibrium density profile. So what we are going to do tomorrow is rewrite this integral equation for the density profile and then try to solve it. The solution of this integral equation will give us the profile of the equilibrium density in the presence of a wall at the position omega. We expect that this solution will be the semicircle when omega is larger than root two, and something else when omega is smaller than root two, okay? And once we have the equilibrium density of the gas in the presence of the wall, we are basically done, because this will be the most probable configuration, so we can estimate the integral with a saddle point approximation. In the large n limit, this integral will go as the exponential of minus beta n square times the functional evaluated at the equilibrium density. And this object is basically the free energy of the constrained gas, so our problem is essentially solved, right? So all you have to do is think physically about what happens if you have a set of charged particles that repel each other, but you stick a wall right in the middle of where they would like to stay, and you don't let them overcome this wall. They need to stay to the left of the wall, but they want to be as comfortable as possible given this extra constraint. So what can they do? They will arrange into a different density profile, which will not be the semicircle, and most likely they will try to be as close as possible to the wall, because if the wall were not there, they would immediately be ready to reconstruct the semicircle.
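Collecting the two functional derivatives just computed, the stationarity condition can be transcribed as (with mu a Lagrange multiplier enforcing the normalization of n, not written out on the board):

```latex
\frac{\delta E_w[n]}{\delta n(x)}
\;=\; \frac{x^2}{2} \;-\; \int_{-\infty}^{w} \mathrm{d}y\; n^\ast(y)\,\log|x-y| \;+\; \mu \;=\; 0 ,
\qquad x \le w .
```

Differentiating once with respect to x turns this into the singular integral equation x equals the principal-value integral of n star of y over x minus y, whose unconstrained solution is precisely the semicircle, consistent with recovering it for omega larger than root two.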
They would immediately be ready to swim, to hop over to the other side of the fence. Yeah, for the, yes. Where can you find it? I'll find the reference and ask Erika to circulate it. It's a Phys. Rev. paper where they do basically this calculation in full, including the calculation of the entropy term. So I'll ask her to circulate the PDF. Okay.