You have some announcements for them? No, actually nothing special. So has there been a change in the schedule? No, I don't have any. Yeah, I've not checked it. Maybe it changed in the meantime. So good morning everybody. It's nice to see that almost all of you survived the weekend. Be ready, because tomorrow's weather forecast is indicating something like minus nine. Yeah, that's what I could see. But in this case, the best way to cope with this kind of problem is just to stay inside the building: you stay here, you study, you work, and so there is no problem. There is no special announcement to be made, apart from the fact that you have received an email about a change in next week's schedule, I think. Also, we received some exercises from Gregory. I think we will do the exercises, because self-evaluation is as important as evaluation done by somebody else. Okay, thank you. Okay, good morning to everybody. So, maybe just to repeat or rephrase what Andrea said: you should have received an email from Erica with a short exercise sheet. I tried to produce some questions that I think are quite typical of what you should expect at the exam. So I would strongly recommend that you look at it and work on it, and you will get a correction tomorrow, on Tuesday. That will also be useful for me, to get feedback from you on whether it is too easy or too difficult, so as to more or less gauge the level and write an appropriate exam. Okay, but try to take it seriously. You will also receive two links, with two references by myself and my colleagues. These are, of course, not the only sources of information that you should have on this, but at least I know what is inside these papers, and I know that they will be useful.
So one of them concerns, more or less, the description of this May model and the third-order phase transition that we want to describe here. That is one of the two papers. The other one concerns large deviations, the very brief introduction to them that I gave on Saturday and that I will briefly recall. So, this is what we were talking about on Saturday. Basically, we were again looking at this May model. You remember, this quite simple model of interacting species, and the question is whether one can compute the probability that the system is stable. We had indications, from May and from collaborators later on, that if you look at this probability of stability as a function of alpha, or instead as a function of one over alpha, what you see in the large-n limit is that there is a transition: the phase is always stable at low interaction strengths. That is, the system is stable when alpha is small, which means one over alpha is large; in that region, the probability that the system is stable is one. On the other hand, when alpha is large, that means w equal to one over alpha is small, the system becomes unstable with probability one. Now, the nice thing, and that was the motivation, is that this probability is precisely related to the distribution of the largest eigenvalue of some matrix models, the so-called Gaussian ensembles. Usually they are parameterized by a real number beta, so they are called the Gaussian beta ensembles. For such ensembles, the joint law of the lambda_i, that means the joint law of the eigenvalues, which are real numbers here, is known explicitly and is given by this law. Okay.
So again, I started with the most natural example. I should say it is not the simplest, but it is certainly the most natural in this context, and it corresponds to beta equal to one: the GOE, the Gaussian Orthogonal Ensemble. Okay, so that is the case where you take a real symmetric matrix and fill it with independent Gaussian random numbers. And of course, because it is real symmetric, the eigenvalues are real, and the joint law of the eigenvalues is written there. Now, an interesting property, which was already noticed by Dyson, is that this joint law can be interpreted as the Boltzmann weight of a system of charged particles interacting via this logarithmic Coulomb interaction, a repulsive one, with a minus sign here, and confined in a harmonic well. This is what is usually called the Coulomb gas; Coulomb because this logarithmic interaction corresponds to the Coulomb interaction in 2D. Here, my particles live on a line, so they are confined on a line, and on top of that they feel this confining potential. Now, I explained to you that, as a result of the competition between these two terms, between this confining potential and these repulsive interactions, the density turns out to have a finite support. That means that in the large-n limit, the particles are confined to a given interval. And for n large, we have seen that the density, rho of lambda, which is just 1 over n, sum over i, delta of lambda minus lambda_i, averaged, goes to a limiting profile, and this limiting profile has support on the interval from minus square root of 2 to plus square root of 2. When n goes to infinity, this goes to 1 over pi times square root of 2 minus lambda squared. So that's Wigner. This is known as the Wigner semicircle. Yes?
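To make this concrete, here is a minimal numerical sketch (my own illustration, not from the lecture): symmetrize a matrix of independent standard Gaussians and rescale by the square root of n, a convention I am assuming so that the spectral edge sits at square root of 2, then check the eigenvalue density against the semicircle rho(lambda) = sqrt(2 - lambda^2)/pi.

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_eigenvalues(n, rng):
    """Eigenvalues of one GOE-like matrix: symmetrize independent N(0,1)
    entries, then rescale so the spectrum fills [-sqrt(2), sqrt(2)]."""
    a = rng.normal(size=(n, n))
    m = (a + a.T) / 2.0
    return np.linalg.eigvalsh(m) / np.sqrt(n)

# pool eigenvalues of several samples to estimate the density
lam = np.concatenate([goe_eigenvalues(400, rng) for _ in range(50)])

# empirical fraction of eigenvalues near 0 vs the semicircle prediction
frac = np.mean(np.abs(lam) < 0.1)
pred = 0.2 * np.sqrt(2.0) / np.pi   # approx. integral of rho over [-0.1, 0.1]
print(frac, pred)
```

A histogram of `lam` against rho would show the full semicircle; the single-bin check above is just the cheapest quantitative comparison.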
This term. So this term actually tends to confine the particles in the middle of the trap. In other words, if you did not have this interaction term here, then this confining potential would really favor the configurations where all the lambda_i are close to zero. If you think about it, eventually the system will try to minimize its energy, and if you only had this confining term here, all the particles would sit at lambda_i equal to zero, because they all want to minimize their potential energy. Now, this is not possible. It would cost a huge energy to the system, because of this repulsive interaction. And remember that I insisted on the fact that in these matrix models, two eigenvalues do not want to sit close to each other: there is a so-called level repulsion phenomenon, which can be viewed as a logarithmic interaction. The competition between this confining potential and this repulsive one effectively results in the density having a finite support. So all the particles will be spread over this interval from minus square root of 2 to plus square root of 2, with the density given by Wigner's semicircle. Yes? But at this level of description, I cannot really explain that; one needs to go one step further. That means one has to write an equation for this rho of lambda. When you write the equation for this rho of lambda, you see that mathematically you do not necessarily have a finite support. But as far as I know, there is no simple physical argument at this level that tells you that. It is something that really depends on the type of interactions and the type of potential that you have. That's true. By changing the measure on your matrices, you can essentially change this potential here.
Okay, so this one is quite universal, this repulsion that I mentioned, but you can actually tune this potential. If you change the potential, in general you will change the shape of the density. If your potential is not sufficiently confining, it might be that the particles spread out, and the limiting case is basically when this potential grows only logarithmically, like this log that you have here. But this would take me a little too far; these are very interesting questions in the context of RMT, of course. My point here was to say something about lambda max. Of course, there are many other interesting things to say about these matrix models, but I want to focus on lambda max, first because this is a lecture on extreme statistics, and second because in our model we have seen that this is really the object we are after. Now, because of the fact that we have a finite support here, what I tried to convince you of is that this lambda max, when n goes to infinity, will almost surely converge to square root of 2. That means that the probability that lambda max is, say, smaller than w, when n is very large, just to make the connection with this quantity here, will indeed have this form, with a w_c which is precisely square root of 2. Is that right? Right, because I am saying that, with probability one, essentially, when n goes to infinity, the distribution of lambda max will be a simple delta function, and the cumulative distribution is just a step function. Okay, so already at this stage we can understand the origin of this transition, and we can pinpoint the exact value of this w_c that May found, which is exactly square root of 2 and which corresponds to the edge of the support here.
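A quick way to see this convergence numerically (again my own sketch, with the same assumed scaling that puts the edge at square root of 2): the sample mean of lambda max creeps up toward square root of 2 as n grows, the finite-n deficit shrinking with n.

```python
import numpy as np

rng = np.random.default_rng(1)

def lambda_max(n, rng):
    # largest eigenvalue of a symmetrized Gaussian matrix, edge at sqrt(2)
    a = rng.normal(size=(n, n))
    return np.linalg.eigvalsh((a + a.T) / 2.0)[-1] / np.sqrt(n)

deficit = {}
for n in (50, 800):
    vals = [lambda_max(n, rng) for _ in range(30)]
    deficit[n] = np.sqrt(2.0) - np.mean(vals)   # how far below sqrt(2) on average
    print(n, np.mean(vals))
```

The deficit is expected to shrink like n to the minus two-thirds, which is exactly the scale of the typical fluctuations discussed next.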
Now, we could say more, and we want to say more; this is what we did before, not yesterday, but on Saturday. Essentially, I first discussed what I call the typical fluctuations of lambda max. What I showed you, using the argument that we had constructed in the IID case, is that basically the typical fluctuations are such that lambda max is of the form a_n, which here is square root of 2, plus b_n, which we estimated to be of order n to the power minus two-thirds, times a random variable. And this random variable, okay, I propose to write it as one over square root of 2 times some chi. Let's call it chi beta, because in general it depends on the index beta that you have here. What I explained to you is that, for the typical fluctuations, if I look at the cumulative distribution of this random variable chi beta, which no longer depends on n, and this is only asymptotically true when n is large, then this P of chi beta smaller than x is what I call F beta of x, which is the Tracy-Widom distribution, okay? Now, we will come back to this Tracy-Widom distribution, but I told you that it can be written in some cases in terms of the solution of some differential equations. It has some very nice non-Gaussian tails: exponential of minus mod x cubed on the left, exponential of minus x to the power three halves on the right. So it is clearly a non-Gaussian distribution, quite asymmetric too. But what I tried to suggest to you is that these typical fluctuations are very important, and if I have time at the end, I will show you that this Tracy-Widom distribution actually appears in many models of statistical mechanics.
But for our purpose, that means to understand this transition, if I want to understand the nature of this transition, I need to know more about the fluctuations, and I would like to say something not only about the typical fluctuations, but about the large fluctuations. So typically I would like to say something about the case where the deviations from square root of 2 are of order one. One question, for instance, would be: what is the probability that all the eigenvalues are negative? That means, what is the probability that lambda max is smaller than zero? This is a perfectly well-defined question. Actually, w does not stop at zero; it is well defined for any value of w. But of course, when you start to probe these large deviations, you leave the regime where the Tracy-Widom distribution applies and you enter the large deviation regime. Now, of course, I did not have time to do a proper course on large deviations, and that is not really the purpose here, but nevertheless I tried to give you some basics about them, and that is what I will discuss this morning in more detail, say. So now I would like to understand the large deviations, that means the regime where essentially lambda max minus square root of 2 is in general of order one, or say of order some power of n which is much larger than n to the minus two-thirds; that could be of order n to the power minus two-thirds plus epsilon, with epsilon strictly positive. Here I will essentially just look at the interesting regime, the case where lambda max minus square root of 2 is of order one. And I illustrated this concept of large deviations on a very simple model, which was this coin tossing problem. Yeah, so let me just make this small detour.
As an example, I looked at this coin tossing problem, where you get a head or a tail, each with probability one half. And I was looking at n_H, the number of heads, which can still be written as a sum of IID random variables sigma_i, equal to one if heads and zero if tails. It is a bit cryptic here, but I did that in detail last time. So now look at the distribution, the probability that n_H is equal to m. You can compute it explicitly. Let me plot the log of this probability as a function of m. What we saw, of course, is that it is centered around the mean value, which is n over 2: on average, you will have n over 2 heads. And you have some distribution around this n over 2. So we know that, in general, the typical fluctuations are Gaussian. But I convinced you that there is actually a more general regime, which I called the large deviation regime. So what I showed you last time is that this probability actually has a large deviation form, which had this form here, and we could compute explicitly this function phi. Phi of c was basically this entropy: c log c plus one minus c times log of one minus c, plus log 2. It has a minimum at c equal to one half, that is, around m equal to n over 2. So if you plot the log of this, basically this is minus n times phi of m over n, which you can compute explicitly, and you would essentially get something like that. What you find is that locally, near the top, you have a quadratic behavior, and this quadratic behavior tells you that locally you have something which looks like a Gaussian. Now, the Gaussian approximation is very good very close to the top, but it gets worse and worse as you go into the tails.
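This can be checked directly with exact binomial numbers (a small sketch of mine, not from the lecture): compare the exact log P(n_H = m) with the Gaussian approximation and with the large deviation estimate minus n phi(m/n). Near m = n/2 the Gaussian is excellent; deep in the tail it fails badly, while minus n phi tracks the exact value up to corrections of order log n.

```python
import math

n = 100

def log_p_exact(m):
    # log P(n_H = m) for n fair coin tosses: binomial coefficient / 2^n
    return math.log(math.comb(n, m)) - n * math.log(2.0)

def phi(c):
    # large deviation rate function: c ln c + (1-c) ln(1-c) + ln 2
    return c * math.log(c) + (1 - c) * math.log(1 - c) + math.log(2.0)

def log_p_gauss(m):
    # CLT approximation: mean n/2, variance n/4
    return -((m - n / 2) ** 2) / (n / 2) - 0.5 * math.log(math.pi * n / 2)

for m in (50, 60, 90):
    print(m, log_p_exact(m), -n * phi(m / n), log_p_gauss(m))
```

Note that phi(1/2) = 0, so the large deviation estimate only captures the exponential decay; the order-log n prefactor is what separates it from the exact curve everywhere.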
So that's the quadratic approximation. It took me some time to recall all this, but I thought it could be useful; otherwise, I had the feeling of losing you right from the beginning. Is that fine? So now what I want to do is the same kind of analysis, but for the distribution of lambda max. That is, of course, much more complicated. And one thing which is also quite different, compared to the case of the sum, is that the large deviations to the right and to the left of the typical value are actually quite different. This can be understood physically, and I will show you why and how later on. But let me first quote the results. So this P, you remember, is a probability; it is not a cumulative distribution. Now I really want to give you the result for the large deviations of lambda max. So let's go. For this, I will look at the density. Up to now, we were looking at this guy; to make contact with it, I prefer to look at the PDF, which is d over dw of that. Now, how does it look? There are basically three regimes. There is the first regime, which we have already understood, where essentially w minus square root of 2 is of order n to the power minus two-thirds. That is the central regime. Maybe it is good to have a picture at the same time, a similar picture but for lambda max; I just keep this one for comparison. Let me draw the distribution of lambda max, so I can plot it; maybe it is better. So again, I am drawing the density here, between minus square root of 2 and plus square root of 2, and on the same graph I plot this PDF here, that is, d over dw of P of lambda max smaller than w. So there is a regime again, around square root of 2, which we have described and which is basically this guy here. Okay, and it has a width n to the power minus two-thirds.
And this is the Tracy-Widom regime. Okay, so that regime we actually know already; this is Tracy-Widom here. So this is the derivative of F beta, and the good scaling variable is actually w minus square root of 2, multiplied by n to the power two-thirds, and you also need to multiply by square root of 2. Since it is the derivative, there is a prefactor square root of 2 times n to the power two-thirds. Okay, is that fine, this form here? I probably never wrote it in this form, but is it okay? All right, so why do I write it this way? Because we know that lambda max minus square root of 2, I already told you, is of the form one over square root of 2 times n to the power minus two-thirds times chi beta. So you see that chi beta can again be written as square root of 2 times n to the power two-thirds times lambda max minus square root of 2. Okay, and that is the variable that follows the Tracy-Widom distribution; that is this variable here. So now the question is what happens on both sides. Let us first look at this regime here, on the left. Of course, the two regimes we are discussing are in fact smoothly connected. They don't need to be, okay, I am making the same mistake again: of course they are connected. It is basically like this, okay? So we have this regime here: this is the left large deviation. Now, what is the form of this tail here? Well, it has a very different form, also different from what we have seen there, although it has a bit of the flavor of it. At leading order, it reads: exponential of minus n squared times some function phi minus of w, plus terms of order n, which I will not write. That holds for square root of 2 minus w of order one, with w smaller than square root of 2. So that is really this part here.
So roughly speaking, you have here a probability which decays extremely fast, of this form, exponential of minus n squared times something. It is very, very small. Okay, I will explain it to you a bit later, but for the moment, just notice that this is different from that one, okay? So what is the interpretation of this phi minus? I can probably already say it now. You remember that I can interpret this cumulative distribution as the partition function of a Coulomb gas of particles in the presence of a wall, okay? So now you are pushing these particles here with a wall which is there. And because you are on the left side, you are really pushing all your particles. Now, this n squared phi minus is essentially the energy that you need to put all these particles to the left of the wall. And if you think a little bit, you remember that the energy contains the sum over i and j of minus log of the absolute value of lambda_i minus lambda_j. This term, you see, has n squared terms in it, and that is the n squared that you see here, okay? So the energy that you need to push all your particles to the left scales like n squared, because you really need all of them to rearrange. And that really implies a strong modification of the Wigner semicircle law: the density, when you push the wall here, will not at all look like the Wigner density. The energy cost for this global rearrangement scales like n squared because this is a kind of mean-field model, right? All the particles interact with all the others, so the number of interaction terms is simply n times n minus 1 over 2, which scales like n squared over 2. Fine. Sorry. Yeah, it is inside the exponential. Yes, yes.
So that means that this is the lowest order, and then you will have a term of order n times, okay, phi minus one of w, if you want. And then the next term will actually be of order log n. In principle, and it is only recently that people have been able to compute the whole series here, we know the next orders; but for the sake of the discussion here, I only need this leading term. Now, what happens on the other side? Yes? Yeah, so phi minus is some function, an explicit function that, in this case, one can compute exactly. Now, the physical meaning of it: n squared times phi minus is the energy that you need to push your particles to the left of the wall at w, smaller than square root of 2. Maybe I will have a clearer picture a bit later, right after that. So why is it so? Because you have to remember, I already mentioned it, that this cumulative distribution can be interpreted as the partition function of your Coulomb gas, but in the presence of a wall. So now, if w is smaller than square root of 2, of course the density cannot extend up to square root of 2, so something must happen there: the typical configurations will look quite different. And that is basically the energy you need to push all these particles to the left of the wall. Now, keeping this idea of pushing particles in mind, what happens if instead I want lambda max to be much larger than square root of 2?
Now, on the other hand, if you look at the probability that lambda max is really far to the right of square root of 2, the situation is quite different, because what essentially happens in this case is that, to have one particle far to the right of square root of 2, the typical configurations will be such that n minus 1 particles sit inside the Wigner semicircle. So most of the particles will actually be between minus square root of 2 and plus square root of 2, and only one particle will be very far away. And this actually costs much less energy, because you only need one particle to be pulled out. You see, the particles like to sit in there, but if you just want lambda max to be much larger than square root of 2, then you only need one particle to move far from the others. And the energy cost for doing that is simply of order n; we will see why in a minute. So in other words, it is much easier, and I think you can more or less feel this, simply because of the repulsive interaction. As a consequence, the probability here turns out to be exponential of minus n times phi plus of w. And here, okay, the subleading terms depend a little bit on the model; usually you will get terms of order one, in fact, but let me be conservative. Is that clear? So again, that means that you now have something like that on the other side, in yellow: exponential of minus n squared on the left and exponential of minus n on the right. Okay, so this is the right tail and this is the left tail. So now we are almost there: if we want to describe properly this May transition, we are almost there, because we can now really look at what this transition is.
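The order-n right tail can be made quantitative with the energy argument just given. Here is a sketch of mine (normalization conventions may differ from the lecture's by an overall beta-dependent factor): the cost of holding a single charge at position w > sqrt(2), with the other charges left in the semicircle, is n times the effective potential w^2/2 minus the integral of rho_sc(lambda) ln|w - lambda|, measured from its value at the edge so that it vanishes at w = sqrt(2); this is, up to normalization, the rate function phi plus.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def rho_sc(lam):
    # Wigner semicircle density on [-sqrt(2), sqrt(2)]
    return np.sqrt(np.maximum(2.0 - lam ** 2, 0.0)) / np.pi

def single_charge_energy(w, m=200_000):
    # effective energy of one charge at w: harmonic confinement minus the
    # logarithmic attraction toward the bulk charges in the semicircle
    edges = np.linspace(-SQRT2, SQRT2, m + 1)
    lam = 0.5 * (edges[:-1] + edges[1:])   # midpoints avoid log(0) at w = sqrt(2)
    dlam = edges[1] - edges[0]
    return w ** 2 / 2.0 - np.sum(rho_sc(lam) * np.log(np.abs(w - lam))) * dlam

e_edge = single_charge_energy(SQRT2)

def phi_plus(w):
    # rate-function sketch, shifted so that phi_plus(sqrt(2)) = 0
    return single_charge_energy(w) - e_edge

print(phi_plus(1.5), phi_plus(2.0), phi_plus(3.0))
```

A consistency check on this sketch: differentiating the effective potential gives w minus the Stieltjes transform of the semicircle, which equals sqrt(w^2 - 2) for w outside the support, so the numerical derivative of phi_plus at w = 2 should be close to sqrt(2).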
Okay, so that is the result of a rather technical analysis, and I will just comment a bit on the picture later on. Well, no, I mean, they are quite different; these two problems are completely disconnected. The only reason why I chose this example is basically to give a feel for what large deviations are. So I looked at this very simple problem, where I consider a specific random variable which is the sum of IID random variables, in the large-n limit. There will typically be two different regimes: one concerning the typical fluctuations, which we know from the central limit theorem are here of order square root of n. So I am back to this problem, sorry. Okay, so here, you see, both agree very well, and that is the Gaussian regime. And now, as I said, you will always have at least two different types of regimes: one concerning the typical fluctuations, and the other concerning the rare events, or say the large deviations. That means when you are very far away from this n over 2 here, at a distance from it which is much larger than square root of n, typically of order n in this case, okay? So, for instance, I was computing the probability that n_H is equal to 0, or n_H equal to 1, which is essentially the same. And then I showed you that there, the Gaussian approximation is very bad. There is a wide difference between the Gaussian approximation and the exact result, which turns out to be very well described by a large deviation form, which typically has this kind of form, okay? So the only connection between the two problems is that indeed there is a central regime; one should compare this curve here and that curve there, okay?
So this one is in log scale; this one I plotted in linear-linear, but the meaning is basically the same. This part here, the typical fluctuations, is Tracy-Widom here, and Gaussian there. So the equivalent of the central limit theorem's Gaussian approximation is basically the Tracy-Widom result. And then we know that there will be large deviation regimes when you are far away from these typical fluctuations, okay? That means when you leave this Gaussian here, or this Tracy-Widom regime here, you will find different kinds of scaling. So in this problem, the typical scale is square root of n; in that case, the typical scale is n to the minus two-thirds. And when you go far from these values, on these scales, you enter different regimes. Now, an important difference between these two models is that the regimes on the right and on the left are quite different here, simply because the two problems are different and there are strong asymmetries in this problem that you do not have there. The result is that you have something which is exponential of minus n times some function, which is a bit reminiscent of this one, if you want, okay? I did not write it, just because otherwise the figure would be a bit messy, but this exponential of minus n times phi plus of w is similar to this guy here. Now, on the left, you have another scaling, which is exponential of minus n squared times another function. So again, this is a bit similar to that one, but instead of n you have n squared. So these are the similarities and differences between the two models. But intrinsically, they have nothing to do with each other; they are completely disconnected models. Again, I just took this as an example to introduce large deviations, but not more than that. Is it clear?
Okay, so now what I want to stress is that we have a nice characterization of this May transition. And in fact, you see that, okay, yeah, that's fine. I think now I can erase this; we understood it more or less, I hope. Now, that was for the PDF, okay? So if I want to look instead at P of lambda max smaller than w, this is what up to now I called F_n of w, the cumulative distribution. We know there is w equal to square root of 2 here, and it will have this kind of form there. So now I know a little bit more about it, because of course it converges to one, okay? Maybe I wrote it for the PDF because I think it is slightly more natural, but for the purpose of what I want to say in a minute, it is better to write what it means for F_n of w. If I am on the left here, nothing changes: it will be just exponential of minus n squared phi minus of w in regime one, okay? And I do not want to repeat everything, so let me call them regime one, regime two, regime three. Regime two is also fine: this is just F beta of square root of 2 times n to the power two-thirds times w minus square root of 2. That is the second regime. And then I have a third regime, and here one has to be a little careful: F_n is actually the integral of this, okay? Again, on the left this will not change anything at this order. But on the right, when w is larger than square root of 2, the leading term is 1, right? And then there is a small correction which is given by that. So this is 1 minus exponential of minus n phi plus in the third regime. Is that clear? Okay, so it is just another way of writing it.
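Collecting the three regimes just listed for the cumulative distribution F_n(w) = P(lambda_max <= w), in the notation of the lecture (this is only a compact restatement, not an additional result):

```latex
F_n(w) \;\simeq\;
\begin{cases}
e^{-n^2 \phi_-(w)\,[1 + o(1)]}, & \sqrt{2} - w = O(1) \quad \text{(regime 1: left large deviation)},\\[6pt]
\mathcal{F}_\beta\!\left(\sqrt{2}\, n^{2/3}\,(w - \sqrt{2})\right), & |w - \sqrt{2}| = O(n^{-2/3}) \quad \text{(regime 2: Tracy--Widom)},\\[6pt]
1 - e^{-n \phi_+(w)\,[1 + o(1)]}, & w - \sqrt{2} = O(1) \quad \text{(regime 3: right large deviation)}.
\end{cases}
```

The o(1) factors stand for the subleading terms (of order n on the left, of order one on the right, plus log n corrections) that the lecture mentions but does not write out.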
Okay, now, I think that from a probabilistic point of view, it is probably nicer to look at the density, and that is why I wrote things like that. Now I want to come back to physics. I told you that F_n of w has a nice interpretation as the partition function of your system of particles in the presence of a wall. I now want to translate these results into that language, okay? So when you take the derivative with respect to w, you see that, of course, there will be a phi prime: if you take the derivative of that, there will be a term which is minus n squared phi minus prime of w, times the exponential of minus n squared phi minus. But this will introduce some additional powers of n, which in any case are hidden somewhere there, okay? There are log n corrections that I did not even write here, so I do not care about this. Here it is obvious that if I take the derivative, I obtain this. Now, the less obvious statement is this one. But again, we know that F_n of w goes to one when w goes to infinity, and that is how we should think about this guy. Is that okay? Okay, so now I really want to understand in detail what this tells me in the language of the May problem. This F_n of w, you remember, is really related to the probability of stability, right, in the May problem. To do that, I just want to remind you that this F_n of w is essentially the partition function of a Coulomb gas in the presence of a wall. And that means that the natural quantity to look at is not F_n strictly, but rather the free energy. Okay, there is some constant that I do not want to discuss here, and this is like this, right? I wrote that already on Saturday. But basically, you have your Boltzmann weight, and you want the probability, again I remind you, that lambda max is smaller than w.
So the probability that λ_max ≤ w is the probability that all the eigenvalues are smaller than w, and that is the partition function of a Coulomb gas — a log-gas — in the presence of a wall at w. I will come back to this part of the blackboard in a while; I just wrote it to remind you that this quantity F_n(w) is a partition function, and the natural object is −(1/β) log F_n(w), the free energy. Now, in short-range interacting systems — think of the Ising model — the free energy is extensive: it scales like n, the number of particles. Here the situation is different, because you have a long-range interacting system, a mean-field model: each of the n particles interacts with every other one, and the logarithmic interaction is in fact extremely strong. As a matter of fact, it turns out that in the large-n limit the typical free energy does not scale like n but like n². So that is something I can already say here.
Now, in fact, this is also written here: look at this quantity, −log F_n(w). Maybe I should — let me introduce a β here, otherwise it is not consistent. It is actually somewhat miraculous that these limiting functions do not depend on β. [Question: what is β?] Beta is a parameter; we write it because it has a natural interpretation as an inverse temperature, but just take it as a number. [The same β?] Yes, it is the same β everywhere — that is actually quite nice. Can you read what I wrote? Good. So now look at this quantity — it is the free energy, so I want to call it small f_n(w). From that result you see something quite nice: if you take the log of the first regime and divide by n², you obviously just get φ₋(w). If instead you take the log of the third regime and divide by n², you get something that goes to zero as n → ∞. In other words, if you really look at this free energy f_n(w) as n → ∞ — let us forget about regime two, which shrinks away for this purpose — then when w is smaller than √2 you see what I get: basically φ₋(w), which is non-zero and exactly computable.
Now, if you look at what happens on the other side, you have log(1 − ε) with ε very small — and log(1 − ε) is basically −ε, which here is exponentially small — divided by n², and for large n that obviously goes to zero, in fact exponentially fast. So in that region the free energy simply vanishes. I will not discuss the central regime in detail for the moment. But you see that there is actually a phase transition, between a phase where you are pushing your system — a pushed phase, pushed because you are pushing the wall into the edge of the density (I will come back to this in a minute) — and a phase which, you remember, comes from a single particle going very far away from the rest: a pulled phase. I will comment a bit more on these two. But you now really have a phase transition in the thermodynamic sense: this model exhibits a transition between a phase where the free energy is non-zero and one where it is zero, as w crosses the value √2. So the transition that May found has a very nice thermodynamic interpretation. Now, in the standard classification of phase transitions, which is due to Ehrenfest, you classify the order of a transition by the lowest discontinuous derivative of this quantity: for a first-order phase transition, the first derivative of f(w) is discontinuous; for a second-order transition, the second derivative of f with respect to w is discontinuous; and so on.
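In formulas, the two limits just described combine into (a summary of what is on the board; Φ₋ is the left large-deviation rate function):

```latex
f(w) \;\equiv\; -\lim_{n\to\infty}\frac{1}{\beta n^{2}}\,\ln F_n(w)
\;=\;
\begin{cases}
\Phi_-(w) > 0, & w<\sqrt{2} \quad\text{(pushed phase)},\\[4pt]
0, & w>\sqrt{2} \quad\text{(pulled phase)},
\end{cases}
```

and in the Ehrenfest classification the order of the transition at w = √2 is the order of the lowest w-derivative of f that is discontinuous there.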
So that means that to find the order of the transition we must understand how this function behaves when w is close to √2. In other words, you have these two phases, and if you plot f(w) you get something like this: here is √2, and on the left you have your function φ₋(w). The order of the transition is controlled by the exponent in the behavior of φ₋(w) close to √2 — that is the standard procedure. Here it turns out that the behavior is cubic: what one can show is that φ₋(w) behaves like (√2 − w)³, with a constant in front that one can even compute, but we do not care about it — the important point is the power three, not a power two. So look at the derivatives: the first derivative of φ₋ vanishes on both sides as w → √2, and so does the second derivative; but the third derivative tends to a finite, non-zero value as w → √2 from below, while it is zero on the other side. [Question: can one engineer other orders?] Yes, you can engineer it, but you need to work a little, because this cubic behavior is extremely robust: if you change the confining potential, for instance, you would typically still observe a cubic transition, unless you fine-tune it in a very specific way. So this is quite universal.
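To make the derivative bookkeeping concrete, here is a minimal numerical sketch in plain Python (the overall constant c in front of the cubic is an arbitrary choice, since we did not compute it): a toy free energy equal to c(√2 − w)³ below the edge and 0 above it has first and second derivatives that vanish from both sides at w = √2, while the third derivative jumps from −6c to 0.

```python
import math

SQRT2 = math.sqrt(2.0)

def free_energy(w, c=1.0):
    # Toy rate function with the cubic edge behavior discussed above:
    # f(w) = c*(sqrt(2)-w)^3 in the pushed phase (w < sqrt(2)),
    # f(w) = 0 identically in the pulled phase (w > sqrt(2)).
    return c * (SQRT2 - w) ** 3 if w < SQRT2 else 0.0

def nth_derivative(f, x, n, h=1e-3):
    # Recursive central finite-difference estimate of the n-th derivative.
    if n == 0:
        return f(x)
    return (nth_derivative(f, x + h, n - 1, h)
            - nth_derivative(f, x - h, n - 1, h)) / (2.0 * h)

below, above = SQRT2 - 0.01, SQRT2 + 0.01
for n in (1, 2, 3):
    print(n, nth_derivative(free_energy, below, n), nth_derivative(free_energy, above, n))
# First and second derivatives vanish as w -> sqrt(2) from either side,
# while the third derivative is -6c just below the edge and 0 just above:
# a third-order transition in the Ehrenfest sense.
```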
So what I want to say here is that the third derivative, d³φ₋/dw³ — and hence the third derivative of the free energy — is discontinuous when you cross w = √2: it takes two different values depending on whether you sit slightly above or slightly below √2. And this means that you have a third-order phase transition. Let us now try to recapitulate all this on a graph. I claim that we now have a quite nice description of the phase diagram of May's model. So let me draw the phase diagram of the May transition, with two axes — and please take care: this axis is α, and this axis is 1/n. What I have been describing is the limit n → ∞, so this part here is the n → ∞ line, and on that line there is a transition at α = 1/√2: at low α — a weak-coupling regime — you have a stable phase, while for large α you have an unstable phase. Of course, this transition only exists as n → ∞, that is, only on that line. So you have these two regimes, basically — but remember that for finite n we had an intermediate regime, the one where w − √2 was of order n^(−2/3). So what does that mean?
Well, that means that if you look at your system at finite n, there will be a crossover: for finite n you have a third, intermediate region — a crossover, if you want — between the stable and the unstable phase, and this crossover is described by Tracy-Widom. That is very typical of what happens with phase transitions: strictly speaking, the transition only exists in the thermodynamic limit, n → ∞, and at finite n you have a region characterized by finite-size scaling, an intermediate region connecting the two fixed points, if you want — the stable one and the unstable one — and this intermediate regime is precisely the Tracy-Widom regime. So — and I will just finish with this — I think this is a very nice picture of the May transition: it is quite simple, and you really see that stability is governed by these random matrix models. In other words, another picture we can draw: look at this probability of stability, this time as a function of 1/α, just reversing the axis so that you are not misled. We have seen this kind of phase transition: in the large-n limit, n → ∞, the probability jumps from 0 to 1. Now, if you look at what happens for finite n — I should draw it not in dotted lines but really like this — the step function is smeared out over a certain scale, and this scale is precisely the one here, of order n^(−2/3), so
that means that here you are again in the Tracy-Widom regime: if you look at what happens close to √2, you get something described by the Tracy-Widom distribution. Here this side is stable and this side is unstable, and I like to think of the stable regime as the weak-coupling regime — α small — while for large α you are in the unstable regime. Questions? [Question.] No, it is just that here I plotted it as a function of α and there as a function of 1/α. [Question.] Yes, indeed, that is what happens — but again, do not forget that this is a simple model. If you look at the literature around May's paper, many people have worked on this, and I am not claiming that the conclusions you draw from this model are super-universal and describe every ecological model in the world; but for this model, the statement you made is correct. Okay, if you do not like the mixed conventions — it was indeed not very pedagogical — let me correct it: this axis is now 1/α, and you do not need to change much: the transition sits at 1/α = √2, this side is stable and that side is unstable. Sorry about it. So again: strong coupling — α large, hence 1/α small — and you are unstable; weak coupling — 1/α large — and you are in the stable phase. And the regime that I was trying to explain here corresponds to this one. Is
that fine? So now maybe one comment at this stage. We have here a third-order phase transition, which in statistical physics is not very common. Nevertheless, it turns out that in high-energy physics people have observed this kind of transition: the first observation was in the context of QCD — Yang-Mills theory on the lattice — in a paper by Gross and Witten in 1980, followed by Spenta Wadia, who also predicted this third-order phase transition in the context of Yang-Mills theory. That setting is completely disconnected from this May stability problem and RMT, but it turns out — there are links one can explain — that all these models exhibiting a third-order phase transition are closely connected to each other. So let me take five minutes to explain this a little more, to try to understand this stable-to-unstable transition, and let me rephrase what I said about the exp(−n²) and exp(−n) factors, because I think they are quite important to understand, and I want to comment on the fact that we have a Coulomb gas with a wall. What does that mean, and what does it mean when you cross the transition by moving the wall? There are essentially three kinds of situations. Let us start with the simplest one, forgetting about the wall for a moment: we know that the particles like to sit on the Wigner semicircle, supported between −√2 and √2. Now you want to add a wall to these particles. So let us look first at the situation where w is strictly larger than √2: suppose
that you put a wall here. Obviously, if it is very far away from √2, the particles do not care about it: everything behaves exactly as if the wall were not there. Now, I told you that if you want a particle that sits at the wall, what you have to do is take one of these particles and bring it close to the wall, while the rest of the particles stay essentially as they were. If you want one particle that really escapes, you need to pull that single charge out of the semicircle — but essentially only that one particle feels the wall. That is again what I call the pulled phase. Before looking at the other case, let me note two things about this figure. If you want to compute F_n(w) itself — the partition function of the Coulomb gas in the presence of the wall — and the wall is very far from √2, it has no effect, because with probability 1 in the large-n limit all the particles lie between −√2 and +√2. That is why, if you remember, for w larger than √2 the leading term of F_n(w) is 1: the integral takes exactly the same value as at w = ∞, and only the corrections matter. So what are the corrections? Now I ask: if λ_max is to sit at some w very far from √2, what happens is that you take one of the particles and put it out there, very far away, while the remaining n − 1 particles stay quietly between minus √2 and plus
√2. So if you really want a large λ_max, you need to pull it out of this gas: one particle leaves the semicircle while all the others sit gently where they are — I do not want to disturb them, because that would cost me a huge amount of energy, so I just take one of them. And the energy needed to do that is proportional to n. Why? Because this one particle interacts with the n − 1 others, and that is the energetic cost of displacing a single particle that interacts with n others. That is the origin of this factor n. Now look instead at the other situation: you put the wall here and you push it, and at some point you cross √2 and end up in the other phase. This is really another story: the particles obviously cannot stay where they were, and the resulting density is quite different — the true density looks more like this, with an accumulation, in fact a divergence, of the density close to the wall, simply because you are pushing the gas and it can no longer follow the equilibrium density it would have without the wall. As I said, in order to accommodate the presence of the wall the particles need to rearrange drastically, and since they all interact with each other — the interaction term is of the form Σ_{i≠j} log|λ_i − λ_j| — all n particles are involved in this change, which costs an energy proportional to n². So that is the case where you are pushing the gas — the pushed phase — and the energy needed to do that is proportional to n². That is why here the probability is typically of order exp(−n²), while here it
will be of the order exp(−n). And then in between, of course, you have the critical case, where the wall sits exactly at √2 — and that is basically the Tracy-Widom point. Tracy-Widom behaves exactly like a critical point, if you want: you put the wall at w = √2 and that point is critical, with the pulled phase on one side and the pushed phase on the other. Now, it turns out that the Tracy-Widom distribution is universal in much the same sense as critical phenomena: we know that in the vicinity of a critical point the behavior does not depend on the microscopic details of the system, and that is why so many systems display the Tracy-Widom distribution. Of course, if you start changing the interactions — specifically the interactions — things are different and you may observe different kinds of transitions; nevertheless, in many cases you still observe third-order phase transitions. There is a recent series of works, by Pierpaolo Vivo and co-workers for instance, investigating the generality and the generalizations of this third-order phase transition; we have also generalized it to many problems. What is quite nice here is that you can grasp a little the origin of the universality of Tracy-Widom, which is connected to the universality of phase transitions. [Question about higher dimensions.] Yes, that is a good question. You can of course consider Coulomb gases in higher dimensions; there are several things you can consider. One, the case you are raising, is the true Coulomb gas: particles in the plane interacting via the two-dimensional Coulomb interaction — still logarithmic, but now in 2D, where the log is the true Coulomb potential. In that case it is nice for this
model, because it is connected to another set of random matrices — the simplest one, actually: take random matrices without any symmetry, say complex ones, which is the nicest example. You fill your n × n matrix with independent random Gaussian entries, real and imaginary parts independent, and look at the distribution of the eigenvalues. They are of course no longer real: they spread out in the complex plane. It turns out that the law of these eigenvalues is precisely the law of the two-dimensional one-component plasma, that is, of particles interacting via the true 2D Coulomb interaction. You can ask similar questions and you find similar answers: for instance, in 2D you can ask for the probability that all the eigenvalues stay within a certain radius, and you again observe a third-order phase transition — except that there you lose Tracy-Widom; other edge laws appear. [Question.] Yes, it is again a third-order phase transition; this has been discussed by Vivo and co-workers, and also by Fabio Cunden. Something you can also look at — something we have done more recently — is the true one-dimensional Coulomb gas: instead of the log you take the interaction |x_i − x_j|, and you again find a third-order phase transition, with yet another type of law — not Tracy-Widom; something of the same flavor, but different. So this structure seems quite robust. Maybe I will end by discussing some applications of Tracy-Widom. It is certainly quite a long story, so let me start with the
application that I prefer, because it was also historically the starting point. This Tracy-Widom distribution was discovered in 1994, and at first it remained there as a somewhat mathematical object. Then, at the end of the 90s — in 1999 — a beautiful result came out from three great mathematicians, Baik, Deift and Johansson, who discovered that the Tracy-Widom distribution arises in a completely different problem, in principle also of mathematical nature, having to do with random permutations of numbers. I want to show this application because it is quite striking — I do not know whether you will like it. From then on, people realized that this result has many applications in various models of statistical mechanics, in particular to the Kardar-Parisi-Zhang (KPZ) equation, and that was really the starting point. So let me finish this part on RMT with that — it started a very nice story, in which people realized that Tracy-Widom is extremely robust and appears in a huge variety of problems. We recently showed, for instance, that it also appears in the problem of free fermions; it has been found in finance; it has been found in wireless communication problems. So it is extremely nice, and in one of the two references that I gave you, you will find a more exhaustive list of applications. Here I want to discuss the problem that really started this long story: the longest increasing subsequence of a random permutation. So what do I call a random permutation? I take n numbers — there will be n! permutations
of these numbers, and I consider the simplest case where all the permutations carry uniform weight: each permutation arises with probability 1/n!. So that is my random object: I look at all the permutations and I want to do statistics over them. This problem is actually quite old — it goes back to Ulam, from the 1960s — and it is the famous LIS problem. Let us illustrate it on a simple example, with n = 8. You draw one random permutation — call it σ — for instance σ = 3, 7, 4, 5, 1, 2, 6, 8. Now consider increasing subsequences. What is an increasing subsequence? You scan the permutation from left to right and select a series of numbers ordered increasingly. For instance, 7, 8 is one of them. Can we find another one? Yes: 4, 5 — and I can extend it: after 4, 5, the numbers 1 and 2 are not possible, 6 is possible, and if I want I can finish with 8; I can also start with the 3. So the blue one was 7, 8; this one is 4, 5, 6, 8 — though of course I could take only 4, 5, or 4, 6, 8: all of these subsequences are acceptable. And you found another one: take the 3 — I like this one — 3, 4, 5, 6, 8. So you have a certain number of increasing subsequences, and now you look at the longest one. In fact, the last one you gave me is the longest one, so you look at the
longest sequence: this is the longest increasing subsequence. [Question about repeated values.] No — these are permutations, so repeated numbers are not possible: I take the numbers from 1 to 8 (or 1 to N in general) and just shuffle them — this is of course related to card-shuffling problems — so all the entries are different. Nevertheless, you can still have several longest increasing subsequences: I have not checked this example in detail, but there may be several of the same maximal length, and what you look at is that length. Here the length is L_8 = 5. Then you draw another random permutation, do the same exercise, and record the length again. So L_N is a random variable — the length of the LIS — and you ask for its probability distribution. That means you want to compute the quantity P(l, N): the probability that L_N is smaller than some value l, where L_N is the length of the longest increasing subsequence you can construct from a given permutation. This problem sat there for a long time, and people had partial results. For instance, the asymptotic growth was known: L_N ≈ 2√N as N → ∞ — this goes back to Ulam's time, although the constant 2 was only established rigorously later. Apart from that, the problem was sitting there: no one had a very good idea of how to compute the full statistics or say much more about it — until 1999 and the three people I just mentioned.
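As an aside, the LIS length is cheap to compute with patience sorting — a standard O(N log N) algorithm, not something from the lecture — which also makes Ulam's 2√N growth easy to see in a quick Monte Carlo. A stdlib-only Python sketch (the sample sizes are arbitrary choices; for moderate N the measured mean sits visibly below 2√N because of the negative-mean fluctuation correction of order N^(1/6)):

```python
import bisect
import math
import random

def lis_length(perm):
    # Patience sorting: tails[k] is the smallest value that can currently end
    # an increasing subsequence of length k + 1.
    tails = []
    for x in perm:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # extend the longest subsequence found so far
        else:
            tails[i] = x      # keep the same length with a smaller endpoint
    return len(tails)

# The lecture's example permutation of {1, ..., 8}: one LIS is 3, 4, 5, 6, 8.
print(lis_length([3, 7, 4, 5, 1, 2, 6, 8]))  # -> 5

# Monte Carlo check of Ulam's asymptotics E[L_N] ~ 2*sqrt(N).
random.seed(0)
N, samples = 400, 200
mean_lis = sum(lis_length(random.sample(range(N), N)) for _ in range(samples)) / samples
print(mean_lis / math.sqrt(N))  # slowly approaches 2 as N grows
```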
Well, first of all, there are three of them — brilliant mathematicians — and they showed the remarkable result, which came out in 1999, that L_N, looked at in the correct way, is related to Tracy-Widom. What they showed is that in the large-N limit, P(L_N ≤ l) can be written as F₂((l − 2√N)/N^(1/6)), where F₂ is the Tracy-Widom distribution — in this case really F₂, the β = 2 one. That was quite a surprise, because at the time there were no eigenvalues in sight — at least it is not easy to see the random matrices, although there is indeed a random matrix problem behind this. More than that, it was known that this LIS problem, pretty mathematical as it is, is related to models of statistical mechanics — in particular to a directed polymer problem, which I will describe — and that is how it started and why people became extremely interested in these results. So it turns out that this random permutation problem is related to a polymer model. [Question.] Yes, that means the average value is 2√N to leading order — it is a bit like in random matrices. What it means is that if you take N very large and draw a random permutation, the longest increasing subsequence has length of order 2√N; there are fluctuations around it, of order N^(1/6), and these fluctuations are given by Tracy-Widom. In other words — good question — if I look at L_N as a random variable for large N, then there is a deterministic part, A_N = 2√N if you want, plus B_N = N^(1/6) times a random
number, and this random number is χ₂: a random variable distributed according to the Tracy-Widom distribution for β = 2, the GUE ensemble. Is that clear? So indeed, in the large-N limit the deterministic part is much larger than the fluctuating part: the fluctuations are much smaller, and typically, for a random permutation, the length of the longest increasing subsequence is close to 2√N. Now, why is this interesting beyond the purely mathematical framework? It turns out that there is a nice relation to a polymer model. How does it work? This was actually already known, and it is called the Hammersley path process — Hammersley was a mathematician of the same era as Ulam. Let me rephrase this longest increasing subsequence problem. We had N = 8, so consider a square grid of size 8: coordinates 1 through 8 on each axis. And consider one permutation — for clarity the one I gave you, σ = 3, 7, 4, 5, 1, 2, 6, 8 — which means σ(1) = 3, σ(2) = 7, σ(3) = 4, and so on. Now let us draw these points (i, σ(i)): σ(1) = 3 is here, σ(2) = 7 is there, σ(3) = 4, then 5, then 1, then 2, then 6, then 8. Very good. These are my points, and I want to construct a path through them — a directed path, in the sense that at each time step, if
I start from this point, for instance, I can only go to the right and then up. So from here, for instance, I could go here, and once I am here the only point where I can go is this one, and I am done: I am only looking at directed paths. So let's add lines for clarity and look at one of these increasing subsequences, for instance one that we mentioned, one which is nice: 1, 2, 8. I want to represent it: 1 is this point here, then I move to 2, which is here, and then I move to this one. So this is my sequence. Then I will add a starting point and an end point; this is an additional construction: a starting point here and an ending point there. In other words, this would be one of these paths. Okay, that's fine. So I put these points, and now the criterion is that I am doing a walk which at each time step can only go to the right or up: it is a directed path. That is the path process, and it is directed. This blue path here is a directed polymer: a polymer that starts here and ends there, and it is directed for the reason that I said. All the paths have to go always in that direction, basically east and north; you can never go down and you can never go west. If you think a little about it, all these directed paths are in bijection with the increasing subsequences of your permutation. Okay, so that gives you one type of polymer. In other words, each increasing subsequence is in bijection with a lattice path, which I want to see as a polymer, an optimal polymer. How do I do that? I will say that to each of these polymers I attach an energy
and the energy is basically the number of points that you have touched. That's my model, my polymer model: the energy of the path, of the polymer, is the number of points that it has encountered, the number of points it has touched. Now obviously the question you ask is: I want to look at the optimal path, that means the path with the highest energy. And the path with the highest energy is precisely the one associated with the longest increasing subsequence, and the energy associated to this path is the length of this longest increasing subsequence. In other words, the energy of the optimal path is just L_N. That was one of the connections, if you want, that made people realize that this was indeed a very nice breakthrough result. So the optimal path is the path with the highest energy, that means the path touching the highest number of points; it is in bijection with the longest increasing subsequence by definition. Now, the energy of this optimal path is precisely L_N, and that means that the typical fluctuations of this optimal energy are described by Tracy-Widom with β = 2. At that moment people didn't know it, but this comes from the special geometry that we are considering here: you see, I have a polymer with one fixed end that ends up at another fixed end, so this is called the point-to-point geometry. So you see that we have a directed polymer model, and then, based on universality arguments, one can think that this specific result, these Tracy-Widom fluctuations obtained for a specific model, will actually remain valid for a much wider class of polymer models. And that was indeed shown later on to be the case. That means that people have then looked at
various kinds of directed polymer models, some of which could be solved exactly. Johansson, who played a very important role in these developments, actually solved another model for which he found a very nice connection to RMT, namely in terms of the Wishart-Laguerre ensembles. Now it turns out, okay, I didn't show it, but you may know that these directed polymer models can usually be mapped onto stochastic growth processes within the KPZ universality class. And this model can essentially be mapped onto a model known under the name of the PNG model, the polynuclear growth model, which was eventually solved by Spohn in 2000. And that was really the beginning of a wide field of research around these problems, centered around Tracy-Widom. Okay, so I think that my time is okay; now I can take some questions, but that's the only thing that I wanted to tell you. It is a very nice result; I don't have a simple way to derive it, and although such a way might exist, if it exists I don't know it. In any case it's not trivial, even this term is not trivial. Now, okay, it corresponds to the Tracy-Widom edge, which is also the edge of the Wigner semicircle, but of course here there is no random matrix. Okay, thank you
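As a concrete check on the correspondence discussed above, here is a short Python sketch (my addition, not from the lecture): it computes L_N for the example permutation σ = (3, 7, 4, 5, 1, 2, 6, 8), once by patience sorting and once by brute force over all increasing subsequences, the latter being exactly the maximization over directed-polymer energies (each increasing subsequence is the set of points a directed east/north path touches).

```python
import bisect
from itertools import combinations

# The permutation from the lecture: sigma(1)=3, sigma(2)=7, ...
sigma = [3, 7, 4, 5, 1, 2, 6, 8]

def lis_length(seq):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    piles = []  # piles[k] = smallest possible tail of an increasing subsequence of length k+1
    for x in seq:
        k = bisect.bisect_left(piles, x)
        if k == len(piles):
            piles.append(x)
        else:
            piles[k] = x
    return len(piles)

def optimal_energy(seq):
    """Brute-force 'polymer' version: a directed (east/north) path touches a set of
    points (i, sigma(i)) with both coordinates increasing; its energy is the number
    of points touched.  The optimal energy is the max over increasing subsequences."""
    n = len(seq)
    best = 0
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            if all(seq[a] < seq[b] for a, b in zip(idx, idx[1:])):
                best = max(best, r)
    return best

print(lis_length(sigma))      # -> 5 (e.g. the subsequence 3, 4, 5, 6, 8)
print(optimal_energy(sigma))  # -> 5, the same value via the directed-path picture
```

For this permutation both methods give L_8 = 5, realized for instance by the subsequence 3, 4, 5, 6, 8.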
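And here is a minimal Monte Carlo illustration of the 2√N law for random permutations (again my own sketch, not part of the lecture; the N^{1/6} scale of the fluctuations and the Tracy-Widom form of their law, stated in the lecture, are beyond what this crude experiment can resolve):

```python
import bisect
import random

def lis_length(seq):
    """Longest increasing subsequence length via patience sorting."""
    piles = []
    for x in seq:
        k = bisect.bisect_left(piles, x)
        if k == len(piles):
            piles.append(x)
        else:
            piles[k] = x
    return len(piles)

random.seed(0)
N, trials = 2000, 100
# random.sample of the full range is a uniform random permutation of size N
mean_L = sum(lis_length(random.sample(range(N), N)) for _ in range(trials)) / trials
ratio = mean_L / (2 * N ** 0.5)
print(ratio)  # approaches 1 as N grows; finite-N corrections are of relative order N**(-1/3)
```

At N = 2000 the ratio is still visibly below 1, consistent with the slowly decaying finite-size correction to the leading 2√N behaviour.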