Fluctuations of the free energy of the spherical Sherrington-Kirkpatrick model. Let me first thank the organizers for inviting me here; it is a really nice environment, with lots of people. Before I start, let me begin with an advertisement. We are having a summer school here, and we will also have a summer school in Ann Arbor, Michigan next year. We ran a similar summer school last year, a two-week program with four speakers, and we got a very positive response from the students; I see that many of them are here, collaborating with each other or having made good friendships there, so we decided to do it one more time. Ivan and Yonah will be there, and also Antonio and Sasha. I know that most of the students are in the problem session right now, but there are other advisors here, so please let them know that this event is going to happen.

All right, so about the talk. This is joint work with Ji Oon Lee, who is at KAIST, based on two papers that we wrote together over the last two years. The subject is the following. Think about a random matrix and its largest eigenvalue. By the minimax principle, the largest eigenvalue is the maximum of the quadratic form over vectors x, where x is an n-dimensional vector with norm 1. We know a lot about this lambda_1 for a symmetric Wigner matrix M: lambda_1 converges to a deterministic number, and it fluctuates with the Tracy-Widom distribution. The question we asked ourselves was: this looks like a max, so what happens if you replace the max by a finite-temperature version of it? Here is the finite-temperature version you can imagine.
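As a quick numerical sketch of the setup just described (my own illustration, not code from the talk): sample a Wigner-type symmetric matrix normalized so the spectrum concentrates on [-2, 2], and check that the largest eigenvalue is close to the edge 2.

```python
import numpy as np

# Sample an N x N symmetric Wigner-type matrix with off-diagonal variance 1/N,
# so that the empirical spectrum follows the semicircle law on [-2, 2].
rng = np.random.default_rng(0)
N = 1000
A = rng.standard_normal((N, N))
M = (A + A.T) / np.sqrt(2 * N)

# Largest eigenvalue: converges to 2, fluctuating on the scale N**(-2/3)
# with Tracy-Widom statistics.
lam = np.linalg.eigvalsh(M)   # ascending order
lam1 = lam[-1]
print(lam1)                    # close to 2
```

The N**(-2/3) fluctuation scale means that even at N = 1000 the deviation from 2 is only of order 0.01.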
So instead of a max, you integrate. Instead of taking the maximum over the unit sphere, you integrate e to the beta times the same quadratic form over the unit sphere, take a log, and divide by beta. This is what we call a finite-temperature version, because if you take beta to infinity (beta is the inverse temperature), the main contribution to the integral comes from the maximum, and taking the log gives you back essentially the largest eigenvalue. So this is a finite-temperature version, and the natural questions are: what is the limit, what are the fluctuations, and so on. At the time we did not know any spin glass theory; we just started the project, and only later did we learn that this object is exactly what is called the spherical Sherrington-Kirkpatrick model, the SSK model, in spin glass theory.

So let me define what it is and make the connection. Think about a random symmetric quadratic function on a sphere; here, instead of norm one, let me take the sphere of radius root n. Take a symmetric matrix J with entries J_ij, and as a function of the vector sigma on the sphere, consider this quadratic function. If you make J random, it becomes a random quadratic function. Let us take J to be a random symmetric matrix, a real Wigner matrix, with the variance of the entries of order 1/n, so that the eigenvalues of J stay in a finite interval and their empirical distribution follows the semicircle law; in our normalization the support will be from minus two to two. So this is a natural question, right?
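The two objects introduced above can be reconstructed as follows (a hedged reconstruction in my own notation, not formulas copied from the talk):

```latex
% Finite-temperature version of the largest eigenvalue of a symmetric matrix M:
F_\beta \;=\; \frac{1}{\beta}\,\log \int_{\|x\|=1} e^{\beta\,\langle x, M x\rangle}\, d\omega(x),
\qquad
F_\beta \;\longrightarrow\; \max_{\|x\|=1} \langle x, M x\rangle \;=\; \lambda_1
\quad (\beta \to \infty).
% SSK Hamiltonian on the sphere of radius \sqrt{n}, with J a real Wigner matrix:
H(\sigma) \;=\; \sum_{i,j=1}^{n} J_{ij}\,\sigma_i\sigma_j,
\qquad \|\sigma\|^2 = n, \qquad \operatorname{Var}(J_{ij}) = O(1/n).
```

The beta-to-infinity limit is just Laplace's method: the integral is dominated by the maximizer of the quadratic form.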
So if you have a random quadratic function on the sphere, what can you say about it? But that is exactly the eigenvalue problem, in the following sense: if you look at the critical values of H constrained to the sphere, you take the derivative with respect to sigma_i and introduce a Lagrange multiplier; the derivative equals the Lagrange multiplier times sigma_i, which is exactly the eigenvalue-eigenvector problem. So the critical values of this quadratic function on the sphere are exactly the eigenvalues of J, up to normalization. This is the so-called two-spin case: the Hamiltonian involves products sigma_i sigma_j of two spins, so it is quadratic.

The two-spin SSK model is then defined as the following random measure. Suppose you are given J; it is random, but fix it. Given that J, you put a measure on the sphere proportional to e to the beta (the inverse temperature) times the Hamiltonian H, divided by its partition function. For fixed J this is a well-defined measure; as you change J, drawn randomly from the ensemble of symmetric matrices, the measure changes, and so the behavior of the spins changes depending on how you choose J. The question is then: averaging over the spins, and maybe over J, how do the spins behave? That is the subject called spin glass. One of the first objects to consider is the free energy, which is the log of the partition function, normalized by one over n beta. Written out, it is the log of the surface integral over the sphere of radius root n of e to the beta H with respect to the surface measure omega. And the largest eigenvalue of the matrix J is hiding in there.
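The Lagrange-multiplier computation and the definitions just stated can be sketched as follows (my notation):

```latex
% Critical points of H on the sphere, via a Lagrange multiplier:
\partial_{\sigma_i}\Big(\sum_{j,k} J_{jk}\sigma_j\sigma_k\Big)
  \;=\; 2\sum_j J_{ij}\sigma_j \;=\; \lambda\,\sigma_i
\quad\Longleftrightarrow\quad J\sigma = \tfrac{\lambda}{2}\,\sigma,
% so the critical values of H on the sphere correspond to eigenvalues of J.
% Gibbs measure and free energy of the two-spin SSK model:
d\mu_J(\sigma) \;=\; \frac{e^{\beta H(\sigma)}}{Z_n}\,d\omega(\sigma),
\qquad
F_n \;=\; \frac{1}{\beta n}\,\log Z_n
   \;=\; \frac{1}{\beta n}\,\log \int_{\|\sigma\|=\sqrt{n}} e^{\beta H(\sigma)}\, d\omega(\sigma).
```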
Because we integrated over sigma, this is a random variable depending on J: J is the random input, and the free energy is a random variable rather than a random measure. The largest eigenvalue of the random matrix J is a particular case of this when beta is infinite, in other words when the temperature is zero. So the largest eigenvalue can be thought of as the zero-temperature free energy, and our question is: what can you say at finite temperature? Of course, this area of the spherical Sherrington-Kirkpatrick model has a huge history, as we learned; it is a huge subject in itself, and the random matrix question sits inside it as a very special case, the zero-temperature case of a quadratic Hamiltonian.

More generally, instead of a quadratic symmetric function you can consider a random symmetric monomial of degree p. That is called the p-spin SSK model, where the Hamiltonian now has coefficients J indexed by p indices, from i_1 to i_p, and symmetric in them, so you can think of it as a random tensor. Instead of monomials you can consider mixtures, that is, general polynomials, or even more general analytic functions, and so on: a kind of random function on the sphere. And of course you can also start changing the sphere to other things. The classical theory, not the spherical SSK version but the SK version, the Sherrington-Kirkpatrick model, which is the more fundamental object in the field, is the case where instead of the sphere you look at the vertices of the hypercube: sigma is not on the sphere but takes only the values plus and minus one. That is very different, because in the free, spherical case, if there is an eigenvector pointing in some direction, the spin can align with that direction.
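A hedged sketch of the p-spin Hamiltonian just described (the exact normalization varies between papers; the power of n is chosen so that H is of order n on the sphere of radius root n):

```latex
H_p(\sigma) \;=\; \frac{1}{n^{(p-1)/2}} \sum_{i_1,\dots,i_p=1}^{n}
  J_{i_1\cdots i_p}\,\sigma_{i_1}\cdots\sigma_{i_p},
\qquad J_{i_1\cdots i_p} \ \text{i.i.d. Gaussian, symmetric in the indices,}
```

so for p = 2 this reduces to the random quadratic form above, and the coefficient tensor plays the role of the Wigner matrix J.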
But on the hypercube, the eigenvector direction will generally not be parallel to one of the coordinate directions, so the situation is very different. This whole area is basically: you have a random function on some kind of manifold or graph, and can you say anything about it? It is an entire big area; it started in physics in the 1970s and has seen huge development there, and also mathematically, and more recently there are applications in computer science. But in this talk I cannot say anything about all of that in general. We will consider just one particular thing: the limiting distribution of the free energy of the two-spin SSK model. And let me say from the very start that the two-spin SSK model is the simplest among all of these. There is the spherical version versus the hypercube version, and the general polynomial version versus the quadratic version, and the two-spin SSK is in some sense the simplest; it actually behaves quite differently from the other models. One example: look at the number of critical points. In the quadratic, spherical case the critical points are the eigenvalues, so there are 2n critical points. But for a p-spin model with p greater than or equal to 3, it is known that the number of critical points grows exponentially in n; so there is already a huge difference there. How many critical points there are in the p-spin model was studied in depth by Auffinger and Ben Arous, together with Cerny, a few years ago. So what I am going to talk about here is only the two-spin SSK; the results will not extend to the other cases, and this is a very isolated case. That is what I want to say. The other cases do not seem to have random matrix distributions coming into play, as you will see once I start talking about the results.
Before I talk about the limiting distributions, let us first discuss the law of large numbers, the first-order limit of the free energy. This is well established, for the SK model and for the SSK model in full generality. What is known as the Parisi formula, a famous formula from around 1980, says that F_n converges as n tends to infinity, and Parisi gave a formula for the limit F. The formula is very complicated: one has to solve a variational problem over measures and plug the solution in. Solving the variational problem is not explicit, and there has been extensive study of it: what is the support of the minimizer, is there an absolutely continuous part, are there point masses, and so on. Parisi's work was physical, using the replica symmetry breaking method. Proving it rigorously took quite a long time and was a major challenge in the area: it was done to some degree by Guerra, then definitively established by Talagrand in his famous work in 2006, and further generalized by Panchenko, and there are many other works in that direction. These works are for Gaussian J, but the Gaussian assumption can be dropped; universality was proven by these people as well. This was for the SK model, which is the more difficult one. The easier version is the SSK model, which was introduced a little later than the SK model as a sort of simplification of it. For the SSK, the usual citation is Crisanti and Sommers, who gave the Parisi-formula analogue for the SSK, but a particular case was already studied by Kosterlitz, Thouless, and Jones in 1976, and I will come back to that paper later.
If you focus on the two-spin SSK, the subject of this talk: I said there is a limit F. If you think about the zero-temperature case, the random matrix case, then F should be the limit of the largest eigenvalue, which in our scaling is just the number 2. At a general finite temperature it will be different. In general, writing down F explicitly is not easy, but for the two-spin SSK it has been done; one can solve the variational problem, which is not too difficult. In particular, F as a function of beta is C^2 but not C^3, and there is a critical temperature: beta equals one half is the critical inverse temperature in our scaling. For small beta, which is high temperature, F is linear in beta; for beta bigger than one half it is given by a different explicit formula, and as beta goes to infinity the leading term is 2. That 2 is not a coincidence: it is the edge of the semicircle law, as it should be. This can be obtained from the work of Panchenko and Talagrand, and Guionnet and Maida already had this result in a slightly different form; the Kosterlitz, Thouless, and Jones paper I mentioned also has this formula, not rigorously, but almost rigorously.

So that was the limit F, and what we are interested in is the fluctuations. Before our work, here is what was known, I think. For two-spin at zero temperature, beta equals infinity, that is random matrix theory, so we already know the answer: it is the largest eigenvalue, which in our scaling converges to 2, fluctuates on the scale n to the minus two-thirds, and converges to the Tracy-Widom distribution. That was proven by Tracy and Widom, and by Peter Forrester, for the Gaussian ensembles; Soshnikov proved it for broad classes of Wigner matrices, and it has since been generalized to a huge universality class developed over many, many years; it is a whole subject in itself in random matrix theory.
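For reference, the explicit limit described above can be reconstructed as follows (a hedged reconstruction, as I recall it from the Crisanti-Sommers and Baik-Lee formulas, in the normalization F_n = (beta n)^{-1} log Z_n with semicircle support [-2, 2]):

```latex
F(\beta) \;=\;
\begin{cases}
\beta, & 0 < \beta \le \tfrac12 \quad (\text{high temperature: linear in } \beta),\\[4pt]
2 \;-\; \dfrac{3 + 2\log(2\beta)}{4\beta}, & \beta \ge \tfrac12 \quad (\text{low temperature}).
\end{cases}
```

One can check directly that the two branches agree at beta = 1/2 together with their first and second derivatives, but not the third, which is the "C^2 but not C^3" statement, and that F(beta) tends to 2, the edge of the semicircle, as beta tends to infinity.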
For general temperature other than zero, when beta is not infinity, the famous result is by Aizenman, Lebowitz, and Ruelle. For the two-spin case, and in fact for the more difficult SK model, the hypercube model: they showed that when beta is less than the critical value, that is at high temperature with beta small, the free energy fluctuates on the scale 1 over n, and the limit is Gaussian with an explicit variance. Their work was reproved by several other people: Frohlich and Zegarlinski, and Comets and Neveu gave a completely different proof, and so on; for this two-spin version there are proofs of this kind of theorem at high temperature as well, with different fluctuations. Note that the fluctuation scale here is 1 over n: if this were a classical central limit theorem the scale would be 1 over root n, but here it is 1 over n, so you have to think about that. This is only for high temperature, so there was a gap for beta between one half and infinity; also, this is for the SK model. The SSK model should be easier, so a similar theorem should be provable; it does not seem to be written anywhere, but it should be doable, I guess. For general beta, there is a result by Chatterjee, who I think is speaking this afternoon. He proved something called superconcentration: an explicit upper bound showing that the variance of F_n is bounded by a constant over n log n. That means the fluctuations of F_n are at most of order 1 over the square root of n log n, so smaller than the classical central limit theorem scale 1 over root n; this is consistent with the results above, which are much smaller still. And he was able to do this for the difficult SK model, the hypercube version rather than the spherical one, and for all temperatures, which is a very important contribution. So here is our result, result number one.
So, with Lee. The proof is actually very simple; once I show you how it is done, you will see where the result comes from. We have the two-spin SSK. Take a random symmetric matrix, a real Wigner matrix J, scaled so that J is M over root n, where M has order-one Gaussian entries with mean zero and variance one. (The Gaussian assumption is not important, and the fourth moment being three is not important either; only some constants change. All moments finite is assumed just to make things easier.) We have results for all beta except the critical value one half. For low temperature, when beta is bigger than one half, we always see the Tracy-Widom distribution, all the way; the only change is that the variance is multiplied by an explicit constant. On the other hand, when beta is smaller than one half, we get essentially the same result as the Aizenman-Lebowitz-Ruelle result above; it is the same statement, and I am sure it could be proven by their technique, it is just that we obtain it as well. The key point is that for all low temperatures, away from the critical temperature, you always get the n to the minus two-thirds fluctuations with Tracy-Widom as the limiting distribution. This is for two-spin, and this was two years ago. Around that time several other papers came out, and one of them, by Subag and Zeitouni, treats the p-spin model, but at zero temperature, beta infinite. And you can see it is very different from the two-spin model: for p equals 2, from random matrix theory, we have Tracy-Widom and n to the minus two-thirds fluctuations, but for p greater than or equal to 3, at zero temperature, they show that the fluctuations are of order 1 over n, with a Gumbel distribution. The Gumbel distribution is the extreme-value statistics of independent random variables.
So instead of Tracy-Widom, where the top several eigenvalues are correlated, here the top several critical points are asymptotically independent; you take the maximum of them and therefore get the Gumbel distribution. It is a completely different structure. This holds for all p greater than or equal to 3, but only for beta infinite, and extending it to finite temperature would be an interesting problem.

Our result follows from the following representation, which makes it much more transparent where the n to the minus two-thirds and the 1 over n fluctuations come from. We proved that at low temperature the free energy, a random variable, is approximated by its limit plus, up to an explicit constant, the largest eigenvalue: in the large-n limit the fluctuations of the free energy are dominated by the largest eigenvalue alone. At high temperature, on the other hand, up to an explicit constant, the fluctuations of the free energy are governed by all the eigenvalues, in the form of a sum of a function g evaluated at the eigenvalues, where g is given explicitly: g(x) is the log of a constant minus x, the constant being 2 beta plus 1 over 2 beta, which is bigger than 2. This g is analytic on a domain containing the support of the semicircle law. So the upshot is: at low temperature the largest eigenvalue dominates in this explicit way, and at high temperature the fluctuation of the free energy is given by a sum of a function evaluated at all of the eigenvalues. We do not have a result at the critical temperature; that is an interesting question to think about. Any predictions? Not much, but I will have one line at the end. All right. Of course, lambda_1 here is the largest eigenvalue, for which we know the n to the minus two-thirds fluctuations with Tracy-Widom limiting distribution.
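The decomposition just described can be sketched as follows (a hedged reconstruction, with constants as I recall them from the Baik-Lee papers; here F_n = (beta n)^{-1} log Z_n and deterministic constants depending only on beta are absorbed into F(beta)):

```latex
% Low temperature (\beta > 1/2): only the largest eigenvalue survives,
F_n \;\approx\; F(\beta) \;+\; \frac{\beta - \tfrac12}{\beta}\,(\lambda_1 - 2),
\qquad \lambda_1 - 2 = O(n^{-2/3})
\ \Rightarrow\ \text{Tracy-Widom fluctuations at scale } n^{-2/3}.
% High temperature (\beta < 1/2): a linear statistic of ALL eigenvalues,
F_n \;\approx\; F(\beta) \;-\; \frac{1}{2\beta n}\sum_{k=1}^{n}
      \Big( g(\lambda_k) - \mathbb{E}\, g(\lambda_k) \Big),
\qquad g(x) = \log\!\Big(2\beta + \frac{1}{2\beta} - x\Big),
```

and the second line is of order 1/n and Gaussian by the central limit theorem for linear eigenvalue statistics discussed next.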
On the other hand, the other term is a sum of a function evaluated at the lambda_k's; that is what we call a linear statistic. We heard about this already yesterday in Persi Diaconis's talk. The general result is: for a test function f smooth on an open interval containing the support of the equilibrium measure, the sum of f over the eigenvalues is approximated by n times the semicircle average of f, and the error converges to a Gaussian. Note the scaling: this is a sum of n random quantities, you subtract the mean, and you do not divide by anything. In the classical central limit theorem you would divide by root n, but here you do not, so the concentration is much stronger, because the eigenvalues are all correlated with each other to some degree. Results of this type have been proven in many different settings of random matrices and extended several times: Persi Diaconis already mentioned the Diaconis-Shahshahani paper for the unitary CUE case; Johansson treated the unitary-invariant ensembles and also the real case; and Bai and Silverstein, and Bai and Yao, extended these results to other Wigner-type random matrices, and so on. There are many other papers in this direction, proving that global linear statistics concentrate much more strongly than in the classical central limit theorem, with much smaller fluctuations. So if you plug this into our representation, that is how you get the Gaussian fluctuation, but with a different scaling than the central limit theorem.

So how do we get these results? It is very simple: we use results from random matrix theory. Take Z_n, the partition function, so the free energy is the log of it; it is the integral over the sphere of the Gibbs weight e to the beta sigma J sigma, the quadratic function now written as an inner product.
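Here is a small numerical illustration (my own sketch, not from the talk) of this unusual CLT scaling: the centered linear statistic stays O(1) without dividing by root n. I take f(x) = x squared, for which the semicircle average is 1, so the centered statistic is simply trace(M^2) minus n.

```python
import numpy as np

rng = np.random.default_rng(1)

def centered_stat(n):
    # Wigner matrix with spectrum concentrating on [-2, 2]
    A = rng.standard_normal((n, n))
    M = (A + A.T) / np.sqrt(2 * n)
    lam = np.linalg.eigvalsh(M)
    # sum_k f(lambda_k) - n * (semicircle average of f), with f(x) = x**2
    return lam @ lam - n

# Sample the centered statistic at two sizes: its standard deviation does NOT
# grow with n, unlike sums of n independent variables (which grow like sqrt(n)).
stds = {n: np.std([centered_stat(n) for _ in range(200)]) for n in (100, 400)}
print(stds)
```

For this particular f one can even check the O(1) variance by hand, since trace(M^2) is an explicit sum of squares of the independent matrix entries.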
You take the matrix J and decompose it as an orthogonal matrix times a diagonal matrix times the orthogonal transpose. Then take the orthogonal part times sigma as your new variable, sigma tilde, and change variables. The good thing is that we are integrating over the sphere, so rotating the spins does not change the measure; that is why this change of variables works. The quadratic form now involves only the diagonal part, the eigenvalues, and I rescale sigma tilde so that an n comes outside: the exponent becomes beta n times the sum of lambda_k xi_k squared, with xi on the unit sphere. So this is the integral we have to do; the eigenvectors have all disappeared, and everything depends only on the eigenvalues.

Now we want to compute this. Consider the following auxiliary integral Q(z), with z a parameter. The first factor is essentially the exponential we had before; I multiply it by e to the minus z times the sum of y_i squared and integrate over all of R^n. I compute Q(z) in two different ways. One way: it is a Gaussian integral, so we can evaluate it directly. The other way: use polar coordinates. In polar coordinates the sum of squares becomes R squared, which I rescale to a variable r by setting r equal to R squared. The radial part of the integral runs from 0 to infinity, and I collect the spherical part into a function I(r). The factor r to the power n over 2 minus 1 comes from the Jacobian of the polar change of variables in R^n, together with the rescaling from R squared to r; and this I(r) is exactly the integral we want to compute, once we insert r equal to beta n. So Q(z), which we know from the Gaussian integral, equals the integral over r of e to the minus z r times this function of r: it is exactly the Laplace transform of I(r), times the radial factor.
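The two evaluations of Q(z) can be written out as follows (a hedged reconstruction in my notation; z lies to the right of beta lambda_1 so all the Gaussian integrals converge):

```latex
Q(z) \;=\; \int_{\mathbb{R}^n} e^{\beta\sum_k \lambda_k y_k^2}\, e^{-z\sum_k y_k^2}\, dy
\;=\; \prod_{k=1}^{n} \sqrt{\frac{\pi}{z - \beta\lambda_k}}
\qquad \text{(Gaussian integral)}.
% Polar coordinates y = \sqrt{r}\,\xi with \|\xi\|=1 and r = R^2:
Q(z) \;=\; \frac12 \int_0^\infty e^{-zr}\, r^{n/2 - 1}\, I(r)\, dr,
\qquad
I(r) \;=\; \int_{\|\xi\|=1} e^{\beta r \sum_k \lambda_k \xi_k^2}\, d\omega(\xi).
```

So Q(z) is the Laplace transform of (1/2) r^{n/2-1} I(r), and the partition function is recovered from I(r) at the particular value of r fixed by the sphere of radius root n.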
Once you recognize this as a Laplace transform, you can take the inverse Laplace transform to get I(r): I(r) is obtained by taking the inverse Laplace transform of Q(z), which is given explicitly by the Gaussian integral, and then inserting r equal to beta n. That is all you have to do. Therefore Z_n equals an inverse Laplace transform: a single contour integral, with respect to the Laplace variable, of the Gaussian-integral formula times some other explicit factors. The condition is that the contour should run in the vertical direction, to the right of all the singularities: the vertical contour through gamma must pass to the right of all of the eigenvalues. That is a technical condition, but a very important one. Of course, when we found this formula we were sure that somebody had done it before, and indeed the Kosterlitz, Thouless, and Jones paper already had this formula; in a similar form, in the random matrix theory context, Man Yue Mo, and Dong Wang, who is here, also obtained similar formulas and analyzed them in their random matrix calculations.

So this is the integral we have to compute. What is in it? It is an integral involving the eigenvalues, which are random: there are random numbers inserted into the integrand, and I want to compute the asymptotics of this single integral. I write it as the integral of e to the n over 2 times G(z), where G(z) is 2 beta z minus one over n times the sum over k of log of z minus lambda_k. The temperature enters only through the 2 beta z term; that is the only place beta appears. If you give me the lambda_k's, I put them into this function, take the integral, and that is my partition function.
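A numerical sketch of the saddle point of G(z) (my own illustration, not code from the talk): the saddle gamma solves G'(gamma) = 0, that is, 2 beta = (1/n) times the sum of 1/(gamma - lambda_k), with gamma to the right of all eigenvalues. For beta below one half the saddle sits near 2 beta + 1/(2 beta), strictly away from the spectrum; for beta above one half it sticks to the largest eigenvalue, which is why lambda_1 alone drives the low-temperature fluctuations.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
A = rng.standard_normal((N, N))
lam = np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * N))  # semicircle on [-2, 2]

def saddle(beta, lam, iters=200):
    # Solve (1/N) * sum 1/(gamma - lam_k) = 2*beta by bisection; the left side
    # is strictly decreasing in gamma on (lam_max, infinity).
    lo, hi = lam[-1] + 1e-12, lam[-1] + 100.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        h = np.mean(1.0 / (mid - lam))
        lo, hi = (lo, mid) if h < 2 * beta else (mid, hi)
    return 0.5 * (lo + hi)

g_high = saddle(0.2, lam)   # expect approx 2*0.2 + 1/(2*0.2) = 2.9
g_low = saddle(2.0, lam)    # expect gamma within O(1/N) of lambda_1
print(g_high, g_low - lam[-1])
```

The bisection is just a convenient way to solve the saddle equation; any one-dimensional root finder works, since the left-hand side is monotone to the right of the spectrum.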
Since this is a single integral, the natural thing is to compute it by the method of steepest descent. But the twist is that the integrand is random, so this is a steepest descent analysis of a random integral. That sounds a little scary: remember that your integrand is random. However, we know that the eigenvalues are very rigid, and that is one of the important facts about random matrices. The eigenvalues are very, very rigid; they are not really moving around, they are almost locked into the classical locations where we expect them to be, and when they move, they move all together. That is the feature of the eigenvalues. Therefore, since they are rigid, we can replace each lambda_k by its expected location, and the integral can be controlled even though there is a little randomness; the method of steepest descent can be applied. That is the message here. This is eigenvalue rigidity, and that is why I want to highlight the explicit bound that people obtained; in my opinion it is one of the most important features in random matrix theory. Erdos, Yau, and Yin, in a 2012 paper, showed that with high probability every eigenvalue lambda_k is close to its classical location, and this holds for all of the eigenvalues simultaneously.
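A quick numerical look at rigidity (my own sketch; the quantitative Erdos-Yau-Yin bound is much finer than this crude check): each lambda_k stays very close to its classical location gamma_k, defined through the semicircle CDF by F_sc(gamma_k) = (k - 1/2)/n.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
A = rng.standard_normal((N, N))
lam = np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * N))  # ascending

def F_sc(x):
    # CDF of the semicircle density sqrt(4 - x^2) / (2*pi) on [-2, 2]
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + (x * np.sqrt(4.0 - x**2) + 4.0 * np.arcsin(x / 2.0)) / (4.0 * np.pi)

# Classical locations: invert F_sc at the quantile levels (k - 1/2)/N
# by vectorized bisection on [-2, 2].
ks = (np.arange(1, N + 1) - 0.5) / N
lo, hi = -2.0 * np.ones(N), 2.0 * np.ones(N)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    below = F_sc(mid) < ks
    lo = np.where(below, mid, lo)
    hi = np.where(below, hi, mid)
gamma = 0.5 * (lo + hi)

# Rigidity: the worst deviation over ALL N eigenvalues simultaneously is tiny.
print(np.max(np.abs(lam - gamma)))
```

The largest deviations occur at the spectral edges, on the scale N to the minus two-thirds, exactly the scale on which the Tracy-Widom fluctuations of lambda_1 live.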