Lastly, each of them is very close to its classical location, given by the quantiles of the semicircle law, with a very explicit error bound. And this is almost optimal; you cannot do much better than that. And we can utilize this explicit error bound when we compute this integral. So this is the only page of this proof remaining, the only page that I have here. What Kosterlitz, Thouless, and Jones did was a sort of formal computation of the leading term of this integral, and they got the limit of F_N to leading order. Of course, at that time rigidity was not proven, and so on, so it is not rigorous, but the result should be true. What we did was to make their analysis rigorous, first of all, and also to compute the next-order term, giving the fluctuations. So remember that the function we have to analyze is G, given by this form here: beta is there, the lambda_k are there, and N is going to be large. So we have to find the critical points. If we take a derivative in z, we get this equation, and this is the Stieltjes transform: the Stieltjes transform equal to some number. That is what we have to solve. Now you try to plot this graph and think about it, and there are two different things happening, depending on your beta. When beta is small enough, you can show that this integral can be approximated by the semicircle law; that is OK to do. Then, by solving the equation, your critical z is around this point. When we do this integral, the important thing was that the contour of the integral should be to the right of all of the eigenvalues. In the limit, those eigenvalues will be in the support of the semicircle law, from minus 2 to 2, and this critical point is away from that interval, and therefore things go through. On the other hand, when beta is bigger than one half, then approximating this by the semicircle law doesn't work.
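Written out, the function and its critical-point equation look as follows (a sketch, assuming the normalization in which the semicircle support is [-2, 2]; the precise constants may differ from the slides):

```latex
G(z) = \beta z - \frac{1}{2N}\sum_{k=1}^{N}\log(z-\lambda_k),
\qquad
G'(z) = \beta - \frac{1}{2N}\sum_{k=1}^{N}\frac{1}{z-\lambda_k} = 0,
```

so the critical point solves $\frac{1}{N}\sum_k \frac{1}{z-\lambda_k} = 2\beta$, i.e. the empirical Stieltjes transform equals $2\beta$. For $\beta < 1/2$, replacing the empirical measure by the semicircle law, whose Stieltjes transform is $m_{\mathrm{sc}}(z) = (z - \sqrt{z^2-4})/2$ for $z > 2$, gives $z_c = 2\beta + 1/(2\beta)$, strictly to the right of the support $[-2, 2]$.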
That's not a good approximation. Instead, what one can show is that z_c is going to be very, very close to all of these eigenvalues; in particular, it will be very close to the largest eigenvalue. And one can get an estimate: z_c is always bigger than lambda_1, but one can control the difference, and that difference is smaller than the gap between lambda_1 and lambda_2. So z_c is really, really sticking to lambda_1. To your question: in the critical temperature case, I believe the difference from z_c to lambda_1 will be of the same or similar order as the distance between lambda_1 and lambda_2, and that will make the computation complicated. And then here is a lemma: for all beta other than one half, this F_N, which is the log of this integral, can really be computed by steepest descent, and it can be approximated by the value of G at the critical point. Once you have that, then G is here. Now you don't replace G by its expected mean, but rather keep it, and then start plugging z_c, either this form or that form, in here, and start looking at the fluctuations. If I plug this one in here, there is a sum of logs, a linear statistic. On the other hand, if you plug lambda_1 in here, then there are fluctuations from lambda_1 and also fluctuations from the linear statistic, and you have to compare them, and it turns out that the lambda_1 fluctuation is the dominating one. And that's how the proof goes. All right, any questions or comments? The c_N is just a constant; it doesn't depend on the eigenvalues. It's just an explicit constant for this one here. So what we are doing here is: suppose you are given the lambda_k's, right? Right, that's right, yeah. OK, so even though this is just a transformation, it gives us something about lambda_1; when we are computing this F_N, we don't need this one.
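The two regimes of the critical point can be checked in a small simulation (a hypothetical illustration; the normalization of the Wigner matrix and the equation (1/N) Σ 1/(z − λ_k) = 2β are my assumptions about the conventions on the slides):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
N = 2000

# Real symmetric Wigner matrix normalized so the spectrum fills [-2, 2]
A = rng.standard_normal((N, N))
J = (A + A.T) / np.sqrt(2 * N)
lam = np.linalg.eigvalsh(J)          # ascending: lam[-1] is lambda_1 (largest)

def critical_point(beta):
    """Solve (1/N) sum_k 1/(z - lambda_k) = 2*beta for z > lambda_1."""
    f = lambda z: np.mean(1.0 / (z - lam)) - 2.0 * beta
    # f is huge just above lambda_1 and negative far to the right, so a
    # bracketing root-finder applies
    return brentq(f, lam[-1] + 1e-12, lam[-1] + 100.0)

# beta < 1/2: z_c is near the semicircle prediction 2*beta + 1/(2*beta),
# away from the support [-2, 2]
beta = 0.3
print(critical_point(beta), 2 * beta + 1 / (2 * beta))

# beta > 1/2: z_c sticks to lambda_1, much closer than the gap to lambda_2
beta = 1.0
z_c = critical_point(beta)
print(z_c - lam[-1], lam[-1] - lam[-2])
```

The first pair of numbers agrees up to finite-N error; in the second, the distance from z_c to lambda_1 is tiny compared to the eigenvalue gap, which is the "sticking" phenomenon described above.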
We only have to compute G, right? So I'm going to plug this lambda_1 in here. And then, even though I plug in lambda_1 here, here is lambda_k: for this sum over the eigenvalues lambda_k, it is OK to replace it by the semicircle law. So this one will be replaced by 2 beta lambda_1 plus the integral of log of (lambda_1 minus x) against the semicircle law. And then I expand it again in terms of lambda_1, and that's how we get the result. So in short, this is G. In principle it could be complicated when we plug lambda_1 in here, but only the linear term matters. It's only the first-order approximation in the Taylor series, which means there's a lambda_1 term; not lambda_1 squared or anything else, just lambda_1 is the leading contribution. One has to do some analysis to make sure that that happens, OK? Right, OK? So I'm going to change the subject a tiny bit. So far I talked about SSK, but SSK plus an external field is also an interesting subject. So this is the two-spin SSK, but you may think about adding an external field: h times the summation of the sigma_i's. What it does is that if h is large, and the sigma_i's are aligned in the same direction, then this number will be big, right, if h is a positive number. So this h may interfere with this J, and indeed it does. There is a result of 2015 by Wei-Kuo Chen and Dey and Panchenko. Oh, sorry, there's an "and" there; sorry, not Panchenko. Panchenko is not here. OK, Panchenko's friends, maybe. They considered the even p-spin model; they considered something more general. But for a general p-spin model, including two-spin, and for all temperatures, they found the classical central limit scaling, as long as this h is present. If h is any positive number, just there, then it changes the thing completely, right? So this is a dramatic change, and we have to compare that with the h equals 0 case, right? In the low temperature, for p equals 2 and so on, the scale is there; that's not N to the half.
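The replacement step above, trading the empirical sum of log(lambda_1 − lambda_k) for the semicircle integral, can be checked numerically (a sketch; the matrix normalization is my assumption, and the exact value 1/2 of the limiting integral follows from integrating the semicircle Stieltjes transform):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
N = 2000
A = rng.standard_normal((N, N))
J = (A + A.T) / np.sqrt(2 * N)       # spectrum fills [-2, 2]
lam = np.linalg.eigvalsh(J)

# Empirical linear statistic: average of log(lambda_1 - lambda_k) over k >= 2
emp = np.mean(np.log(lam[-1] - lam[:-1]))

# Same quantity with the empirical measure replaced by the semicircle law,
# evaluated at the limiting location lambda_1 -> 2
rho = lambda x: np.sqrt(4.0 - x * x) / (2.0 * np.pi)
sc, _ = quad(lambda x: np.log(2.0 - x) * rho(x), -2.0, 2.0)

# The integral of log(2 - x) against the semicircle law is exactly 1/2
print(emp, sc)
```

The two printed values agree up to a small finite-N error, which is what justifies working with the semicircle integral and then Taylor-expanding in lambda_1.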
Without the field it was N to the two-thirds or N, depending on the temperature; but here it is just the central limit theorem. So we can try to do this one in our framework, with this integral trick and so on, but then the eigenvector components appear, and one has to do something about them. OK, so I haven't done that; this is something to do in the future. But instead, we thought about an easier problem. It has a very similar flavor, but the analysis is very, very quick, so let me just state that. It is SSK plus, instead of an external field, something called the ferromagnetic Hamiltonian. Instead of the summation of the sigma_i, we take the summation of the sigma_i, squared. That's the change: instead of the sum of the sigma_i, just square it. If you square it, you get the sum over i, j of sigma_i sigma_j, and then this part and that part look similar; you can combine them together, so J_ij plus m is your new matrix. So the Hamiltonian is again quadratic, but now the quadratic part comes from this random matrix with every entry shifted by m. So it is a non-zero-mean Wigner random matrix, also known as a rank-one spiked random matrix, which is well studied. So let me just remind you what spiked random matrix theory says. If you haven't seen it, then it's not a reminder, but OK. Anyhow, what is known is that we are going to scale m to make it interesting: m hat over N. The N is there so that these vectors have norm 1; that is the natural scaling one has to adopt. What is known is that if this scaled mean m hat is not too large, so in other words there is a random Wigner matrix whose mean is a tiny bit big, but not too much, then you don't feel it. And if the mean is large, then there is a huge large eigenvalue, as you can expect from the overall translation. So the transition is well known: m hat less than 1 versus m hat bigger than 1. Two different things happen.
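This well-known transition shows up in a small simulation (a sketch; shifting every entry by m̂/N is the same as adding the rank-one spike m̂·vvᵀ with v = (1, …, 1)/√N):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
A = rng.standard_normal((N, N))
J = (A + A.T) / np.sqrt(2 * N)       # mean-zero Wigner, spectrum ~ [-2, 2]

def largest_eig(m_hat):
    # shift every entry by m_hat/N, i.e. add the rank-one matrix m_hat * v v^T
    M = J + (m_hat / N) * np.ones((N, N))
    return np.linalg.eigvalsh(M)[-1]

# m_hat < 1: the spike is not felt, lambda_1 stays at the edge 2
print(largest_eig(0.5))
# m_hat > 1: an outlier detaches, lambda_1 -> m_hat + 1/m_hat (2.5 here)
print(largest_eig(2.0))
```

Below the threshold the largest eigenvalue sits at the edge of the semicircle support; above it, one eigenvalue separates and its location follows the classical m̂ + 1/m̂ formula from spiked random matrix theory.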
So if it's not too big, then everything just looks like the mean-zero case: the largest eigenvalue behaves like it does here, with the usual Tracy–Widom scale. On the other hand, a large mean pulls one large eigenvalue out of the support, like here, and that one is so free to move that the way it fluctuates is the classical central-limit-theorem-type scaling, one over N to the half. So the result here is that, in the setup of two-spin SSK plus the ferromagnetic term, the complete same analysis goes through, except that you are looking not at a mean-zero Wigner matrix but at a non-zero-mean Wigner matrix; that is the only change. And then, taking into account that these two things can happen, you get the following phase diagram. Here is m hat, the strength of the ferromagnetic part, and the inverse temperature, scaled by 2. When m hat is 0, that is the first part I talked about: you had the spin glass part and the paramagnetic part, which are Tracy–Widom versus Gaussian: the N to the minus two-thirds fluctuation versus the N to the minus 1 fluctuation, the linear-statistics fluctuation, and this one is the lambda_1 fluctuation. On the other hand, when m hat is large enough, in this region, the mean of the random matrix starts to dominate, and the one outlier eigenvalue starts to dominate. There lambda_1 is away from the edge, and you still have a Gaussian, but the origin of this Gaussian is completely different from that Gaussian, and the scaling is also different, because it is central-limit-theorem-type scaling. So this is the result, and the analysis is similar, except for that change. So here is the final slide: open questions. You can of course think about the transitions in between, in particular when beta equals the critical temperature. It's very interesting.
Just using the knowledge of random matrices that already exists, one can get the following result easily. For the transition between this part and this part, along this line here, so the spin glass to ferromagnetic region, m hat should be scaled as N to the minus two-thirds. Upon that scaling, spiked random matrix theory, the rank-one perturbation theory of random matrices, applies, and that was very much studied before. For example, the complex case here; the real case was studied by Bloemendal and Virág, and by Mo, and by Dong Wang here. So for that transition there are well-defined limiting transition distributions that will appear here, and the fluctuations will follow the spin glass fluctuations. But the other parts, this part here, that part here, and also the triple-scaling region, those are open questions. Between the spin glass and paramagnetic parts, by looking at the variances of the fluctuations here and here and matching them, it seems like beta approaching one half plus the square root of log N times N to the minus one-third is the right scaling. But we don't have any concrete mathematical result in that direction. So thank you for your attention. Are there any questions? OK, I'm not an expert in that direction, so I cannot tell, but there are some predictions. There is some numerical analysis, but numerics are very difficult once you have a finite temperature. I believe there are at least two contradicting predictions. Someone at some point claimed that the critical temperature should follow Tracy–Widom, based on the fact that one cannot think of anything else. But I don't think there is any concrete suggestion about that. Maybe it's known more than to me, but N to the four-thirds. Oh, yeah, that's right, yes. Yeah, that's right, yes, yeah. Yeah, I forgot to mention that here: Fyodorov and Le Doussal computed this external field matter.
Here, I mentioned that this was computed for any positive h, but Fyodorov and Le Doussal computed it with h scaling with N, and they found an interesting scaling of h with respect to N. They also studied the transition from the h equals 0 version to the h positive version, because there is also a transitional regime, which they also computed. And also, of course, Zeitouni and, I forgot the other person, Dembo. Dembo and Zeitouni also picked up their results, and some part of that they made rigorous. So there are all those two directions. Only the large deviations; I'm sorry, I'm forgetting, yeah. They did a large deviations theory about that, yeah. Are there any other questions or remarks? Thank you. And the next speaker is Dumitriu, who will speak again about.