So this does not require any further assumptions. It's just this. OK. Any other questions about this?

Next thing, the random assignment problem. There are n tasks to be assigned to n workers, and a_ij is the cost of assigning task j to worker i. Any permutation π is an assignment, with total cost a_{1π(1)} + ... + a_{nπ(n)}, and the minimum cost C_n is the minimum of this total cost over all permutations. When the a_ij are i.i.d. non-negative random variables, this is called the random assignment problem, and it has inspired a large body of work. So suppose the a_ij have a probability density f which is continuous, with f(0) = 1. Under these conditions, Aldous showed that this cost actually converges in probability to ζ(2) = π²/6, resolving a conjecture of Mézard and Parisi. This was later generalized and extended in these papers; in particular, Wästlund has extensions where the condition f(0) = 1 is not required: f can vanish or blow up at 0, which gives very different behavior.

Now, what about the fluctuations of C_n, the optimal cost? First of all, any questions about this model? Is the model clear, the random assignment question? OK. So what about the fluctuations? Surprisingly, the best known upper bound, due to Talagrand, is of order 1/√n, but with some correction of (log n)²/log log n. That is for general cost distributions on [0,1]. If we assume the costs are exponentially distributed with mean 1, then more can be done: it was proved very early on that the variance is at least a constant over n in the exponential case, and Wästlund obtained an asymptotic formula for the variance for exponential costs. However, the order of fluctuations for general cost distributions is actually an open question; the Talagrand bound is still the best known.

So what I proved is this: if a_ij has a density that is smooth, bounded at 0, and satisfies some mild conditions, then the fluctuations of C_n are at least of order 1/√n. So this goes in the other direction. Wästlund told me that if the costs are exponentially distributed, then there is a relatively simple argument using the memoryless property to get this result, which I don't think has been written down anywhere. But for general densities, nothing like this was known.

So I think in the next slide I'm finally going to come to random matrices, just a little bit; I have just one slide on random matrices. But any questions before that?

[In response to a question:] No, this coupling is very different. What you do is scale the a_ij, but the scaling depends on a_ij itself: you scale a_ij by 1/√n if a_ij is less than 1/n, and scale it by 1/n if a_ij is bigger than 1/n, and it's actually more complicated than that. So there is a different coupling here, which I didn't talk about. All the other examples use the more straightforward version.

OK. Finally, here is a little result about random matrices. Suppose M is a random square matrix of order capital N which is a function of i.i.d. random variables x_1, ..., x_n; each entry is a function of x_1, ..., x_n. And assume that this function is homogeneous of degree r: you take λx_1, ..., λx_n, apply the function, and get λ^r times the original matrix.
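In symbols, with M viewed as a matrix-valued function of the inputs x_1, ..., x_n, the homogeneity assumption just stated reads: for every scalar λ > 0,

\[
  M(\lambda x_1, \ldots, \lambda x_n) \;=\; \lambda^{r}\, M(x_1, \ldots, x_n).
\]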
And assume that the law of x_1 has some smooth density, et cetera. Then if little n and capital N tend to infinity while r, the degree of homogeneity, remains fixed, the log of the determinant has fluctuations of order at least N/√n, that is, 1 over root little n, times capital N. So you see, if it's a Wigner matrix, then little n is of order N². And for some other matrices, Toeplitz matrices for example, little n and capital N are of the same order, up to constants.

OK, so here's just one example; I'll not talk about the history and so on in the interest of time, and I just have this last slide. Suppose X is a p×n random matrix with i.i.d. entries, X⁰ is the matrix obtained by subtracting the row mean from each row, and M = (1/n) X⁰ (X⁰)ᵀ is the sample covariance matrix of the data matrix X. So you have a data matrix X, and you take its sample covariance matrix, in statistical terminology. Then the theorem says that under some conditions on the entries, the log of the determinant has fluctuations of order at least √(p/n). You see, the number of independent random variables here is np, all the entries of X, so the bound is 1/√(np) times the order of the matrix, which is p; that gives √(p/n). And this is actually the right answer, which surprised me.

For Wigner matrices this has been resolved quite recently: there was a paper of Tao and Vu using four moments, then generalized to two moments by Van Vu and one of his co-authors. But for sample covariance matrices: the log determinant of a sample covariance matrix is quite an important object in statistics. For p fixed and n going to infinity there is classical work; you can look in Ted Anderson's book and various other sources. But if p and n are both going to infinity, the distribution of the log determinant was surprisingly open until recently. There is a 2015 paper of Cai, Liang, and Zhou where they dealt with the Gaussian case and proved a central limit theorem for the log determinant when p/n tends to c. (Also c equal to 1, but I'm not talking about that.) And √(p/n) is the right order of fluctuations. For this result the coupling is very simple; again, just the same coupling: you scale everything down by a little, and that's it. And as you can see, a sample covariance matrix is homogeneous of degree 2, and the theorem covers a lot of different classes of matrices. So I'm surprised that, again, this crude thing could actually give you the right order.

Oh, no, there is one more example. OK, again, just one slide. (This thing is really not working.) So I just have this one slide. I'll not tell you anything about the Sherrington-Kirkpatrick model; I'll just tell you the problem. Suppose you have i.i.d. normal variables g_ij. The free energy of the Sherrington-Kirkpatrick model of spin glasses is given by the log of the sum over the hypercube of exp((β/√n) Σ g_ij σ_i σ_j); it's written out below. The best known upper bound on the variance of this free energy is n/log n; that's a result I proved some time ago. But when β < 1, there is a result of Aizenman, Lebowitz, and Ruelle who proved that the fluctuations are actually of order 1 and satisfy a central limit theorem after centering.
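In symbols, the free energy just described is

\[
  F_n(\beta) \;=\; \log \sum_{\sigma \in \{-1,+1\}^{n}}
  \exp\!\Big( \frac{\beta}{\sqrt{n}} \sum_{1 \le i < j \le n} g_{ij}\, \sigma_i \sigma_j \Big),
\]

where the inner sum runs over pairs i < j, the usual convention for this model (the talk did not specify the index range, so take that normalization as assumed).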
So here is one result that I got. As far as I know, there were no known lower bounds, and I could show that for any β, this free energy has fluctuations of order at least 1 as n goes to infinity.

OK, so finally, here are some open problems, and there are many more in the paper. First, as I said, minimum matching on a compact set: I think it's still open, and it's not clear how to do it. The second one is a very nice problem that I learned from Christian Houdré: the longest common subsequence problem for random words. You take two random words of length n, where the alphabet is of fixed size, let's say a binary alphabet, maybe, or an alphabet of four letters, and you choose the letters uniformly from the alphabet. Then you look at the longest common subsequence: the longest subsequence of one word that exactly matches a subsequence of the other. For the length of the longest common subsequence, Houdré and Ümit Işlak, who is a student of Larry Goldstein, I think, proved a central limit theorem modulo a lower bound on the variance; they couldn't get the right lower bound on the variance. So that's an open question. In first passage percolation, improve the lower bound on the fluctuations, or prove any non-trivial lower bound in three and higher dimensions. Prove a matching upper bound for random assignment with general cost distributions. Prove a CLT in any of these problems: minimum matching, traveling salesman, all the other things that I talked about. It's all open. And there are many others in this paper. So yeah. OK. Thank you.

[In response to a question:] Hi, Sylvia. So the work is in the lemma, but it's also in this second thing I showed, about bounding the total variation distance using the Hellinger distance (the standard inequalities are written out below). A lot of things can be expressed as functions of independent random variables, and whenever you can express something as a function of independent random variables, you can use this total variation lemma. So it's the combination of these two things, the Hellinger bound and the lemma, and then finally you have to construct the appropriate coupling. So the work is divided roughly equally; there is some work in constructing the coupling also.

[In response to another question:] Yeah, I know. That's why I was surprised. But somehow it seemed like that was the first place, at least for Wishart matrices, for sample covariance matrices, where that was proved. They claimed that that's the first place where it was proved. I don't know; I'll have to look more carefully. I checked with the authors also, and it seemed so, but I don't know.
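To spell out that Hellinger step: the following standard relations between total variation and Hellinger distance are presumably what the lemma combines with the coupling; this is a sketch of the standard inequalities under the convention H²(μ, ν) = 1 − ∫√(dμ dν), not the exact statement from the talk.

\[
  d_{\mathrm{TV}}(\mu, \nu) \;\le\; \sqrt{2}\, H(\mu, \nu),
  \qquad
  H^{2}\Big( \bigotimes_{i=1}^{n} \mu_i, \; \bigotimes_{i=1}^{n} \nu_i \Big)
  \;\le\; \sum_{i=1}^{n} H^{2}(\mu_i, \nu_i).
\]

So for a function f(x_1, ..., x_n) of independent random variables, the total variation distance between the law of f under the original inputs and under the perturbed (coupled) inputs is controlled by the sum of the coordinatewise Hellinger distances, which is what makes the method applicable in all the examples above.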