Good afternoon, everybody. Here is a short summary of the previous lecture; essentially everything I told you yesterday is written on the blackboard. I was considering IID random variables, meaning independent and identically distributed random variables x_1, ..., x_n with some density p(x), and I was mainly interested in x_max, the largest of the x_i's. What I want to compute is the distribution of x_max in the large-n limit. I first made a heuristic argument to evaluate the typical value. That estimate was not extremely precise, but I tried to convince you that it gives the scale of x_max, or of its fluctuations: mu_n is given by the condition ∫_{mu_n}^{x*} p(x) dx = 1/n, where x* is the upper edge of the support of p(x), which may be infinite. The second step was to be more precise and get access to the full distribution of x_max, namely its cumulative distribution, the probability that x_max is less than M. I showed you that there is a fairly explicit formula, F_n(M) = [∫_{-∞}^{M} p(x) dx]^n, and now we want to study the large-n limit of this quantity. I also explained that if you don't do anything, meaning if you just take the large-n limit at fixed M (with a finite x*, for instance), you obtain a pure theta (step) function, which is uninteresting: it gives essentially no information.
Instead, if you want a non-trivial large-n limit, what you need to do is first center your distribution and then scale it. That means you write M = a_n + b_n y, and then you can hope to obtain something non-trivial in the large-n limit, and indeed this is what happens. In the end, extreme-value statistics for IID random variables boils down to answering one question: find the constants a_n and b_n, and the distribution G, such that F_n(a_n + b_n y) has a good limit, eventually given by G(y). (Yesterday I was mostly using G(z), but if you don't mind I prefer G(y) today.) What I told you yesterday, without any proof, and again I will not go into proofs, is that there are three distinct universality classes, indexed by a number ρ which takes three values, 1, 2 or 3: these are the Gumbel, Fréchet and Weibull distributions. Yesterday I gave you the form of these distributions, but I did not discuss much what the a_n's and b_n's are. That is what I want to do today: go through the three classes, be a little more specific, and give you a full account of what these three universality classes really are. Let's start with the first one, the Gumbel universality class. I told you that this corresponds to the case where typically x* is infinite and the pdf p(x) decays much faster than any power law: typically an exponential decay or, more generally, something like exp(-x^α) for some α > 0 as x → ∞. When x* is infinite, these are the conditions.
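To see numerically why the centering matters, here is a minimal Python sketch (an illustration, not part of the lecture; the unit-exponential parent p(x) = e^{-x}, the sample sizes, and the seed are arbitrary choices). Without centering, the CDF of x_max at any fixed M collapses to a step; after shifting by a_n = log n it converges to the Gumbel law:

```python
import math
import random

random.seed(0)

# Without centering: for p(x) = exp(-x), P(x_max <= M) = (1 - exp(-M))^n
# tends to 0 at any fixed M as n grows -- a degenerate (theta-function) limit.
for n in (10, 100, 1000):
    print(n, (1.0 - math.exp(-3.0)) ** n)

# After centering by a_n = log n (with b_n = 1), the limit is non-trivial:
# P(x_max <= log n + y) -> exp(-exp(-y)), the Gumbel law.
n, trials, y = 1000, 1000, 1.0
maxima = [max(random.expovariate(1.0) for _ in range(n)) for _ in range(trials)]
empirical = sum(m <= math.log(n) + y for m in maxima) / trials
print(round(empirical, 2), round(math.exp(-math.exp(-y)), 2))
```

The empirical probability and the Gumbel prediction agree up to Monte Carlo noise, while the uncentered probabilities collapse to zero.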
There is also a subclass of cases where x* is finite and you still get Gumbel. What is this case? Suppose x* is finite and p(x) is extremely singular near the edge, of the form exp(-1/(x* - x)), so not of power-law type. To be clear: when the support is finite, the typical universality class is instead Weibull, which corresponds to p(x) vanishing like a power law at the edge. Here p(x) vanishes in this highly singular way, faster than any power of (x* - x). In the Gumbel case, then, G_1(y) is the double exponential, G_1(y) = exp(-exp(-y)). This quantity is a CDF, a cumulative distribution; if you want the PDF, call it g_1(y), it is just g_1(y) = dG_1(y)/dy, and taking the derivative produces an extra exponential prefactor: g_1(y) = exp(-y - exp(-y)). I want to insist on how this distribution looks as a function of y. For large positive y, exp(-y) is very small, so the double-exponential factor is close to 1 and g_1(y) ≈ exp(-y): an exponential decay, something relatively standard.
On the other hand, when y is negative, exp(-y) becomes gigantic, so the function vanishes extremely rapidly, much faster than any power law. So the PDF has an asymmetric shape, with its maximum at y = 0: an exponential tail on the right, and on the left a decay that is doubly exponentially fast. This is a sketch, but I want to insist on this very fast left decay. Now, that is the distribution; the question is, what about a_n and b_n? It turns out that in this case a_n is exactly mu_n. You remember we had this estimate of the typical value; a_n itself is given by that same condition, ∫_{a_n}^{x*} p(x) dx = 1/n. So the mu_n we had before is precisely this a_n, and you see that to obtain a non-trivial scaling limit, I need to look at the distribution close to mu_n. In other words, if I look at the PDF of the maximum without any rescaling, dF_n(M)/dM, it is centered around a_n with Gumbel fluctuations around it. In particular, there is some probability that the maximum is actually much smaller than its typical value: a quite atypical sample, far from the average.
So you have one sample where the maximum, instead of being around a_n, turns out to be far below it; the gap is negative because you count it relative to a_n. The typical value now has a more precise meaning: it is the mode of the distribution. What about b_n? There is an explicit expression; one could almost have guessed it, and it has a nice interpretation. b_n is the ratio of two integrals: b_n = [∫_{a_n}^{∞} (x - a_n) p(x) dx] / [∫_{a_n}^{∞} p(x) dx]. To interpret it: b_n is the mean distance between x_i and a_n, conditioned on there being a single value x_i between a_n and +∞; a conditioned mean value, the average distance from a_n given that a single "particle", a single value x_i, sits in the interval (a_n, ∞). So we have these two formulas. Now let's see concretely what they mean on a specific case. We will treat an example in two ways: first very naively, in the sense that you could solve it without knowing anything about this theory, and then we will check that the formulas given by the theory match what you find naively. So take the very simple case where p(x) = exp(-x) for x ≥ 0 and zero otherwise, and let's work out the distribution of the maximum.
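Before checking it by hand on the exponential, one can check this conditioned-mean formula for b_n numerically. A small sketch (an illustration, not from the lecture; the midpoint-rule step and cutoff are arbitrary) for the parent p(x) = e^{-x}:

```python
import math

def b_n(a, dx=1e-3, cutoff=40.0):
    """b_n = int_a^inf (x - a) p(x) dx / int_a^inf p(x) dx for p(x) = exp(-x),
    evaluated with a midpoint Riemann sum."""
    num = den = 0.0
    x = a + dx / 2.0
    while x < a + cutoff:
        p = math.exp(-x)
        num += (x - a) * p * dx
        den += p * dx
        x += dx
    return num / den

# With a_n = log n, the memorylessness of the exponential gives b_n = 1 for every n:
for n in (10, 100, 1000):
    print(n, round(b_n(math.log(n)), 3))
```

The value 1, independent of n, is what the hand computation on the exponential gives below, so the quadrature is just a sanity check of the general formula.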
We have already computed the distribution of this quantity: my formula was F_n(M) = [∫_{-∞}^{M} p(x) dx]^n. Since p(x) is zero on the negative side, the integral only runs from 0 to M, and evaluating it gives F_n(M) = (1 - exp(-M))^n for this p(x). Now we want to take the limit n → ∞. The identity you want to use, which is in fact at the heart of all the asymptotic analysis here, is the simple one: lim_{n→∞} (1 - α/n)^n = exp(-α). Given the form we have, it is quite tempting to use this. Of course (1 - exp(-M))^n is not exactly of that form, but I can rewrite it as (1 - (1/n) exp(-(M - log n)))^n: I just multiplied and divided by n, writing the factor 1/n as exp(-log n). It is a slightly convoluted way to write something simple, but now the large-n limit is easy, because instead of M I will make a shift and look at M = log n + y. Let's look at this quantity.
It is just (1 - (1/n) exp(-y))^n, and now I can use the identity: lim_{n→∞} F_n(log n + y) = exp(-exp(-y)). This is my Gumbel law; this is just G_1(y). So without knowing anything about the general formulas, we find that in this case a_n is just log n and b_n is just 1. That is what I was trying to say in words yesterday and can now demonstrate with formulas: taking the large-n limit of F_n(M) at fixed M is pointless. What you need to do is first center the variable and, in general, rescale it: instead of M, look at a_n + b_n y, which here is simply M = log n + y. With that there is no more ambiguity; if you don't do anything to the variable M, you won't obtain any relevant result. So you really need this shift and, in general, this rescaling: look at F_n(a_n + b_n y), here with a_n = log n and b_n = 1, and take the large-n limit; that is what you get. We did this computation without knowing anything about the formalism; now I just want to check that applying the general formulas indeed gives a_n = log n and b_n = 1. That is pretty simple in this case, so let's do the check.
In fact we already computed log n yesterday, when we computed mu_n, but let's redo it with the formula. First a_n, which is such that ∫_{a_n}^{∞} exp(-x) dx = 1/n. That tells you exp(-a_n) = 1/n, so a_n = log n. Quite simple, and in agreement with the observation we made above. Now the same for b_n, which is also quite simple. The explicit formula gives b_n = [∫_{log n}^{∞} (x - log n) exp(-x) dx] / [∫_{log n}^{∞} exp(-x) dx]. I wrote the denominator explicitly so that we understand what we are doing, but there is an immediate simplification: by the definition of a_n it is just 1/n, so b_n is the numerator divided by 1/n. Evaluating the numerator gives exp(-log n)(1 + log n) - log n · exp(-log n); the log n · exp(-log n) terms cancel, leaving exp(-log n) = 1/n. So b_n = (1/n)/(1/n) = 1, which matches our result perfectly. I did this computation to show that the two methods agree; of course it is not a proof that you will always get the Gumbel law here.
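The convergence can also be watched directly on the exact formula. A short sketch (illustration only; the values of n and y are arbitrary) comparing F_n(log n + y) = (1 - e^{-y}/n)^n with the Gumbel limit:

```python
import math

def F_n(m, n):
    """Exact CDF of the max of n iid Exp(1) variables: (1 - exp(-m))^n, m > 0."""
    return (1.0 - math.exp(-m)) ** n

def gumbel(y):
    """Limiting Gumbel CDF G_1(y) = exp(-exp(-y))."""
    return math.exp(-math.exp(-y))

# F_n(log n + y) approaches G_1(y) as n grows, for every fixed y:
for y in (-1.0, 0.0, 2.0):
    for n in (10, 1000, 10**6):
        print(y, n, round(F_n(math.log(n) + y, n), 4), round(gumbel(y), 4))
```

Already at n = 1000 the two columns are nearly indistinguishable at this precision.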
But at least in this simple case you can see how the Gumbel law naturally emerges, mathematically, from this pretty simple formula. So that's the first class. I would like to discuss another case briefly. Yesterday I didn't have time to discuss the Gaussian case, and I think you will do it during the tutorial, but I want to mention a nice application of what happens there; I will not do the computations, but there are quite nice effects to be mentioned. So after the exponential, let me discuss the Gaussian case: take random variables with p(x) = (1/(σ√(2π))) exp(-x²/(2σ²)), with σ real and positive. From the general formulas you can compute a_n and b_n. For a_n, which plays the role of the mu_n of yesterday, the computation you will do later in the tutorial gives the leading term a_n ≈ σ√(2 log n), which I showed yesterday. Now a comment. For the exponential, a_n = log n was exact; for the Gaussian this is not the exact result, it is the leading term of a large-n expansion, and there are corrections to it. These corrections turn out to be a bit more involved to compute, and they are typically of the form -(σ/2)(log log n)/√(2 log n), followed by terms subleading compared to that. First, notice that the typical maximum grows quite slowly: √(log n) is a very, very slowly growing function.
But you also see that the corrections decay with n extremely slowly. What this means is that when you analyze concrete data, you have very strong finite-size effects. When people started to develop this theory and then applied it to real data, they really ran into these finite-size effects, which need to be treated carefully. I will not enter into the details, but it is good to know that for extreme statistics of IID random variables the finite-size effects are extremely strong; if you do simulations, and maybe you will have the opportunity to do some, you will see it (finite size means finite n here). So that's for a_n: instead of a logarithmic growth you have √(log n), which grows very slowly. What is even more striking is the behavior of b_n. The computation, which I will probably not do (it can be an exercise, though a little more involved), shows that b_n decays as n → ∞, like b_n ≈ σ/√(2 log n), so it goes to zero. In other words, if you look at the distribution of the maximum, dF_n(M)/dM, it is almost a delta function: it is peaked around σ√(2 log n) with a width that is extremely small, and for large n it is, roughly speaking, a delta function.
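These slow corrections are easy to observe. A Monte Carlo sketch (illustration only; sample sizes, trial counts and seed are arbitrary) comparing the empirical location and width of the maximum of n standard Gaussians with the leading-order a_n and the shrinking b_n:

```python
import math
import random
import statistics

random.seed(1)

def sample_max(n, trials=300):
    """Monte Carlo samples of the maximum of n iid standard Gaussians."""
    return [max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]

# Even at n = 5000 the empirical location sits visibly below the leading term
# sqrt(2 log n), because the corrections decay only like log log n / sqrt(log n);
# the width shrinks like 1/sqrt(2 log n), also very slowly.
for n in (100, 5000):
    s = sample_max(n)
    print(n,
          round(statistics.mean(s), 2),            # empirical location of the max
          round(math.sqrt(2.0 * math.log(n)), 2),  # leading-order a_n (sigma = 1)
          round(statistics.stdev(s), 2))           # empirical width, of order b_n
```

The gap between the empirical mean and √(2 log n) is exactly the strong finite-size effect discussed here.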
That is not completely intuitive: if you take n Gaussian numbers and look at the maximum, its value is almost deterministic; it is σ√(2 log n) with probability 1 as n → ∞. This has a quite nice application which I want to mention, concerning a collection of random walkers; I will talk more about random walks later, but consider the following problem. Take a collection of n Brownian walkers, n Brownian motions, n Brownian particles if you want to be more physical, denoted x_i(t), and suppose they all start from the same position at the initial time: x_i(0) = 0. So they all evolve from the same point, and of course they can cross. What I want to consider is the trajectory of the top path, the envelope of these trajectories: I am looking at the leader, the walker which is currently in the lead. Of course there will be crossings, so it will not always be the same walker at the top. What I claim is that if you have a large number of walkers, from the previous result you can infer that the trajectory of the leader is essentially deterministic: no more fluctuations. How can we see that?
Let's go through it. I define x_lead(t): it is not a single particle, but at any time t it is the maximum, x_lead(t) = max_i x_i(t). The label of the leader changes: sometimes it is walker number 12, sometimes number 13, sometimes number 1, sometimes number 99. There will be changes at the lead, but I always focus on the top position. This is a quite natural question in many problems: you have a large number of walkers, particles, pedestrians, whatever, and you want the trajectory of the leader. Now these are Brownian motions, and they are independent. So take a snapshot at a given time t and look at the statistics: the distribution of x_lead(t) is just that of the maximum of n independent Brownian positions, and the PDF of each x_i(t) is a Gaussian, so I can apply the previous result. Let me write it explicitly: it is a Gaussian which now depends on t, p(x, t) = (1/(σ(t)√(2π))) exp(-x²/(2σ²(t))). And what is σ²(t) for Brownian motion?
I forgot to give one characteristic: all these Brownian particles have the same diffusion coefficient D. Then the σ² we have here is σ²(t) = 2Dt; this is just the mean square displacement of Brownian motion. Now you can apply everything written before, replacing σ by √(2Dt), and again, because of the concentration of measure I depicted, the maximum is essentially a delta function when n is large. From the previous result, the leader x_lead(t), which coincides with the maximum, is typically of the form x_lead(t) ≈ σ(t)√(2 log n) + b_n·y, where b_n = σ(t)/√(2 log n) and y is a non-trivial random variable distributed according to the Gumbel law. In the large-n limit b_n goes to zero, so the leading term is obviously the first one, which is just a number, a deterministic value. And σ(t) = √(2Dt), so x_lead(t) ≈ √(4 D t log n). You see this is of the form √t times the square root of an effective diffusion coefficient, and this effective diffusion coefficient is proportional to log n.
So essentially, if you look at the leader in the very large-n limit, it just grows like √t; it is really deterministic, with very small fluctuations around it, and you can even predict the effective diffusion coefficient, which grows logarithmically with the number of particles. That's a quite nice application: it tells you that for this assembly of Brownian particles, when n is large, the envelope in the (time, position) plane is deterministic. This has nice applications in search problems, predator-prey dynamics, these kinds of things; there is a series of nice papers by Sid Redner and co-workers, and Paul Krapivsky, on this. Is that clear? OK, so that's the Gumbel case; now let's move to the other cases, on which I will be a little more brief, but I hope you could see how it works. Here, indeed, strictly speaking, when n is very large everything goes to infinity, so if you want a well-defined large-n limit you need to rescale things properly. One way to give a meaning to a thermodynamic limit starting from this kind of single-particle model, just as a side remark, is the following, and we will encounter this problem a bit later.
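This deterministic envelope shows up in even a crude simulation. The sketch below (illustration only; n, D, t and the seed are arbitrary) uses the fact derived above, that at fixed time t each walker's position is Gaussian with variance 2Dt, and compares the average leader position with √(4Dt log n):

```python
import math
import random
import statistics

random.seed(2)

n, D, t = 5000, 0.5, 1.0
sigma = math.sqrt(2.0 * D * t)  # width of a single walker's position at time t

# Average the leader position x_lead(t) = max_i x_i(t) over independent snapshots:
leads = [max(random.gauss(0.0, sigma) for _ in range(n)) for _ in range(50)]
mean_lead = statistics.mean(leads)

theory = math.sqrt(4.0 * D * t * math.log(n))  # leading-order prediction
print(round(mean_lead, 2), round(theory, 2))
```

The agreement is only to leading order: as in the Gaussian discussion, the finite-n corrections are logarithmically slow, so the simulated value sits somewhat below the prediction.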
In the setting above, they all started from the same point, and then when n is very large a thermodynamic limit does not mean anything. What you do instead is take your n particles between -L/2 and +L/2, placed randomly, uniformly, and then look at the limit where n and L are both large with the density ρ = n/L fixed. If you want a thermodynamic large-n limit starting from this kind of model, that is the procedure to take; that would be the thermodynamic limit of this model if you want something non-trivial. We will see such an example, maybe not today, but probably tomorrow, and then you can apply the result. There is one caveat if you do that: the positions of these Brownian walkers will still be Gaussian, but since they don't start from the same point they are no longer identical. Instead of exp(-x²/(2σ²)), the PDF of walker i involves exp(-(x - x_i(0))²/(2σ²)). So it is slightly different from what I said before: this is a case of independent random variables that are not identically distributed. Still, you can do things, and I will show an example of how to get something interesting by such a method, probably tomorrow or the day after. It is true that in the model I discussed, if you blindly take the large-n limit everything blows up; but it is still nice to note, and I think not completely intuitive, that the top path is completely deterministic. OK, so now let's move on to the second class, the Fréchet universality class, which corresponds to a power-law decay of the parent distribution.
Here also you can work out quite simple examples, but I am not sure I want to cover many of them, and there is already one that we saw, so I will be quite brief. So this is Fréchet, which corresponds to ρ = 2. Yesterday I told you that this corresponds to the case where x* is infinite, and here it is the only possibility: for Fréchet you really need x* infinite. In this case p(x) has a power-law tail, p(x) ~ x^{-α-1} as x → +∞, with α > 0 of course. As before, there exist a_n and b_n (I will say what they are) such that lim_{n→∞} F_n(a_n + b_n y) = G_2(y), where G_2(y) = exp(-y^{-α}) for y > 0 and 0 otherwise. So what are a_n and b_n? In this case a_n = 0, fairly simple, and b_n is precisely mu_n. So again the mu_n we estimated before gives you the first non-trivial scale of the maximum: before, the order-zero quantity was a_n, which was non-trivial; here a_n = 0, so the first non-trivial number to know is b_n, and it turns out to be exactly mu_n, again defined by ∫_{mu_n}^{∞} p(x) dx = 1/n. One case we can investigate is the Cauchy case that we discussed yesterday, which was also mentioned in this morning's lecture.
Usually one takes this example because someone mentioned the Lévy stable laws this morning: the Cauchy distribution, p(x) = 1/(π(1 + x²)), is one of the Lévy stable laws, probably one of the most interesting classes of distributions with power-law tails, and it corresponds to α = 1. We already did this computation last time and found b_n = n/π, so I will not repeat it. That is the typical scale of the fluctuations in this case: they are proportional to n. Before we only had logarithmic behavior; now we have much stronger fluctuations. I just want to comment on one thing here, which is quite interesting to see. As before, look at the PDF of the Fréchet law, g_2(y) = dG_2(y)/dy. It is very simple to differentiate: g_2(y) = (α/y^{1+α}) exp(-y^{-α}) for y > 0. At y = 0 there is no problem: the prefactor diverges, but the exponential goes to 0 much faster, so if you plot this function you see it vanishes very rapidly close to 0, with this characteristic shape. And in the large-y limit the exponential factor tends to 1, so what you recover is a power law: g_2(y) ≈ α/y^{1+α}. Quite interestingly, this power-law behavior is exactly the same as that of the parent distribution.
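For the Cauchy case this scaling is easy to verify by sampling. A sketch (illustration only; n, trial count, and seed are arbitrary) checks that x_max/(n/π) follows the α = 1 Fréchet law, whose median is 1/log 2:

```python
import math
import random

random.seed(3)

def cauchy():
    """Standard Cauchy sample via the inverse CDF: tan(pi*(U - 1/2))."""
    return math.tan(math.pi * (random.random() - 0.5))

# With a_n = 0 and b_n = n/pi, x_max / b_n -> Frechet with alpha = 1,
# G_2(y) = exp(-1/y); compare the empirical median with 1/log 2 ~ 1.44.
n, trials = 500, 2000
b_n = n / math.pi
scaled = sorted(max(cauchy() for _ in range(n)) / b_n for _ in range(trials))
print(round(scaled[trials // 2], 2), round(1.0 / math.log(2.0), 2))
```

The median is used rather than the mean because the α = 1 Fréchet law has an infinite mean, consistent with the heavy tail discussed here.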
And this is not a coincidence. The main reason is that when you look at the maximum of a collection of heavy-tailed random variables, it is always dominated by one single value, and this single value has the same power-law tail as the parent distribution, OK? So that is quite nice to notice: the exponent here is just that of the parent distribution. [Audience remark about the maximum being centred.] OK, so that's interesting. What it means is that in the previous, Gumbel case there were actually two scales: one, a_n, which gives the typical position of the maximum, and another, b_n, which describes its fluctuations. Here what you see in this distribution is that there is a single scale which controls both the typical position and the typical fluctuations. That's quite different — and yes, that is of course linked to the heavy tail. Is that OK? So that's an interesting remark, I think, to keep in mind. And that is essentially the only thing I want to say about this class: there is no really simple example you can work out as easily as I did before for the exponential case, so afterwards I will move on to the third case, the Weibull case. But first, a word on what μ_n describes here. Initially I presented μ_n as "something typical", whatever that means. Now that we are making progress and analysing the distributions in more detail, we can give this μ_n a more precise meaning. We have seen that in the first, Gumbel case, μ_n is essentially the typical position of the maximum — the a_n, the mode or mean value, roughly speaking.
Now, what we are saying is that in this second case μ_n is something else: it is the typical scale of the fluctuations themselves. [Question.] Yes — OK, my earlier comment was not very precise. What I wanted to say is that the maximum has the same tail as the parent distribution, which is quite generic for power laws. Let me explain it better by comparing the scales we are computing here with the scales you obtain when you sum the same random variables. Look at the exponential case first: exponential random variables have a finite mean, since they are positive, so their sum is typically proportional to n. Or better, let's look at the Gaussian case. If you sum n Gaussian random variables of mean zero, the typical sum is of order √n. But the maximum, as we have seen, is of order √(log n). So for Gaussian random variables the scale of the sum and the scale of the maximum have nothing to do with each other: the sum is really built from a large number of random variables. You might have guessed that with many random variables the sum could be dominated by a few of the largest ones, but this is obviously not true for Gaussians:
because if you add a few values of order √(log n), you get something of order √(log n), never of order √n — there is a very large gap. Now take instead this Fréchet case. For the Cauchy distribution the sum is of order n, and that is of the same order as the maximum, which we found to be of order n as well. And the same holds for generic α smaller than 2: the sum is of order n^(1/α), and the maximum is of the same order. So in these cases the sum is actually dominated by one or a few of the largest values, since they are of the same order as the whole sum — a situation quite different from the simple Gaussian case. Is that clear? I was a bit fast before, but that is what I meant. So that is a sort of demonstration of the common folklore that when you sum a large number of fat-tailed random variables, the sum is typically dominated by a few of them. Here you can really show it explicitly, OK? So let's move to the third case: the Weibull case, ρ = 3. Now we have x*, which is finite — that is what I was depicting yesterday. If I look at p(x), I don't care too much what happens far from x*, but there is a finite edge x*; here it is very important that x* is finite. And the density close to the edge vanishes as a power law — not the essential singularity exp(-1/...) that I was describing before, but p(x) ~ (x* − x)^(α−1) as x → x*. For instance, α = 1 is just the uniform distribution, OK?
So in this case, I told you, there exist a_n and b_n such that, if I take the cumulative distribution of the maximum and center it this way, F_n(a_n + b_n y) has a well-defined limit as n → ∞, given by a function G_3(y) with this curious shape: G_3(y) = 1 for y ≥ 0 and G_3(y) = exp(−|y|^α) for y < 0. It might sound strange, but we will see on one example that it is actually not strange at all. This is called the Weibull distribution; α = 1 is basically a simple one-sided exponential. So much I told you already yesterday. Now what about a_n and b_n? Here, as you could guess, a_n is x* itself: you need to look at the maximum close to x* to see something non-trivial. And b_n is again related to μ_n, though not exactly equal to it: b_n is defined by the condition that there is typically one value in the interval [x* − b_n, x*], that is, ∫_{x*−b_n}^{x*} p(x) dx = 1/n, and then μ_n = x* − b_n. So it is defined in the same way as before — you obtain b_n exactly as we did before. Let us look at one concrete and simple example, which I already mentioned briefly last time precisely to evaluate μ_n in this case. You remember that we looked at the distribution which is uniform on [0, 1], and we found that μ_n = 1 − 1/n — you can check your notes.
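Before turning to the uniform example, here is a hedged numerical sketch of my own (not from the lecture) checking the Weibull scaling for a non-uniform case: α = 2 with p(x) = 2(1 − x) on [0, 1], so x* = 1 and the condition ∫_{1−b_n}^{1} 2(1 − x) dx = b_n² = 1/n gives b_n = n^(−1/2):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 2_000, 3_000

# p(x) = 2(1-x) on [0,1] (alpha = 2, x* = 1); inverse-CDF sampling: x = 1 - sqrt(u).
x = 1.0 - np.sqrt(rng.random((trials, n)))

# b_n solves b_n^2 = 1/n, so rescale (max - x*) by sqrt(n).
y_scaled = (x.max(axis=1) - 1.0) * np.sqrt(n)

# Compare with the Weibull limit G_3(y) = exp(-|y|^2) for y < 0.
for y in (-1.5, -1.0, -0.5):
    empirical = (y_scaled <= y).mean()
    print(f"y={y}: empirical {empirical:.3f}  vs  exp(-y^2) = {np.exp(-y * y):.3f}")
```

The exact finite-n CDF here is (1 − y²/n)^n, so convergence to exp(−y²) is fast.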
So let's do as we did for the exponential: let's just do the computation blindly, in a very simple way. I take p(x) = 1 for x between 0 and 1 and 0 otherwise, and I want to compute the same quantity, F_n(m), the probability that x_max is less than m. There are two cases. First, suppose m ≥ 1. Then this probability is 1, because all the values are bounded by 1 — the probability that the maximum is less than, say, 100 is obviously 1. Now take m < 1. Then F_n(m) = [∫_0^m dx]^n = m^n, because p is just 1 on this interval. Now, m^n as such is not very illuminating; I want to use the same simple trick as before, namely the formula (1 − α/n)^n → exp(−α). So let's just write m this way: m = 1 + y/n, so that 1 − m = −y/n. I'm sure you will agree it is not the simplest way to write m, but it is correct. Then F_n(1 + y/n) = (1 + y/n)^n, and you are happy, because when n goes to infinity this quantity has a nice limit: it goes to exp(y). Of course, for this to hold with m < 1 I need y to be negative — sorry, I almost gave you the wrong sign there. Let's check. No, that's fine.
So let me write it this way to avoid any confusion: with m = 1 + y/n, of course y has to be negative, because m < 1. Then F_n(1 + y/n) → e^y for y < 0, and it is simply 1 for y ≥ 0. What I am claiming is that this formula contains everything I said there: this is exactly G_3(y) for α = 1. Indeed it is 1 for y positive, and e^y is just exp(−|y|) for y negative, which is the Weibull form. Now, x* here is 1, so I have just checked the fact that a_n = x*, and you can immediately check that b_n = 1/n: with p(x) = 1, the integral from x* − b_n to x* is just b_n, and setting it equal to 1/n gives b_n = 1/n — there is no computation at all. Is that OK? So this is another case where you see the same trick appear and you can simply check the general formula. OK, so that is more or less it. There is one thing I would still like to cover to finish, which is the generalisation to the k-th maximum, because it is pretty nice and you will not see this computation very often. I have not invented it — it is quite standard — but it is not something which is taught so frequently, so I would like to take the opportunity to show you how it works.
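The uniform computation can be checked line by line (a trivial numerical sketch of my own): the exact finite-n CDF (1 + y/n)^n is already very close to e^y for moderate n.

```python
import numpy as np

n = 1_000
# P(x_max <= 1 + y/n) = (1 + y/n)^n for the uniform density on [0, 1], y < 0.
for y in (-2.0, -1.0, -0.5):
    exact = (1.0 + y / n) ** n
    limit = np.exp(y)              # G_3(y) = exp(-|y|) for alpha = 1
    print(f"y={y}: (1+y/n)^n = {exact:.4f},  e^y = {limit:.4f}")
```

For y = −2 and n = 1000 the two numbers differ only in the fourth decimal, illustrating how quickly the (1 + y/n)^n trick converges.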
[Question about the behaviour of p(x) at the edge.] Yes — if the density goes to a finite constant at x*, that corresponds to α = 1, just like the step of the uniform case. It can also diverge: p(x) ~ (x* − x)^(α−1) with 0 < α < 1 diverges at the edge, and that is also covered by this class. But indeed α needs to be strictly positive, because otherwise the density is not normalisable. So let me comment on this last part that I want to tell you; I will basically show you one formula and then give you the result. What we did so far was for the first maximum; now I would like to generalise it to the k-th maximum. So what I have in mind now is the ordered sequence: M_{1,n} = x_max is the first maximum, then M_{2,n} is the second maximum, and so on, down to the last one, M_{n,n} = x_min. Note that I am working with continuous variables — they have densities — so there are essentially no ties: no two of these values will ever be equal, or, said differently, the probability that two of them are equal is zero, OK? This is not true for discrete random variables, where all kinds of combinatorial complications arise, but here it is pretty simple. So let me put these values on a line and do the combinatorics I want: here M_{1,n}, here M_{2,n}, et cetera; here my M_{k,n}, then M_{k+1,n}, and at the end the last one, M_{n,n}, OK?
Before, everything I was talking about concerned the first maximum; now I would like to say something about a generic one, M_{k,n}. One way to think about it is simply to count the number of points below and above this value: focusing on M_{k,n} — that's really the point I am after — there are k − 1 values above it and n − k below it, so n points in total. Now what I want to compute is this probability. Let me introduce this more complicated object: F_{k,n}(m), the probability that M_{k,n} is smaller than m, OK? There are several ways for this to happen. The first possibility is that all n values are below m. Another possibility is that the first maximum is above m while all the rest — in particular M_{2,n} down to M_{k,n} — are below m. Or two values are above m but all the others, up to M_{k,n}, are below m, et cetera. This translates into the following sum of probabilities. The first term is the probability that all the points are below m: [∫_{−∞}^{m} p(x) dx]^n. But then there are the additional contributions; the next configuration is the one where exactly one value is above m and all the others sit below m.
That configuration has probability [∫_{−∞}^{m} p(x) dx]^(n−1) × [∫_{m}^{+∞} p(x) dx]^1, and since these random variables are independent and identically distributed, there are n ways of choosing which one sits above m — so a factor n, not a factorial. The next type of configuration has exactly two values above m and the rest below, and I do the same: [∫_{−∞}^{m} p(x) dx]^(n−2) × [∫_{m}^{+∞} p(x) dx]^2, and since I have to choose the two x_i's among n and the pair is unordered, the combinatorial factor is n(n−1)/2, OK? And you see how it goes. Eventually, the last term you can write is the one where k − 1 values are above m and M_{k,n} itself is below. Let's write it explicitly: [∫_{−∞}^{m} p(x) dx]^(n−k+1) × [∫_{m}^{+∞} p(x) dx]^(k−1), with the binomial factor "n choose k − 1" for the choice of the k − 1 points among n. That's fairly simple — a bit tedious to write, but fairly simple. OK, so you can write it in a more compact way. I will not do the asymptotic analysis in detail — it's not very hard, but...
So I can write it as a sum over j from 0 to k − 1, where j counts the number of variables sitting above m — either zero, or one, or two, up to k − 1: F_{k,n}(m) = Σ_{j=0}^{k−1} (n choose j) [∫_{−∞}^{m} p(x) dx]^(n−j) [∫_{m}^{+∞} p(x) dx]^j. That's a quite useful formula to keep in mind: the first factor counts the n − j values below m, the second the j values above. So again, now the question is how to do the large-n analysis of this. Of course it is slightly more complicated than before, but still doable, and the result is fairly simple — very nice, in fact. Again there are the same three universality classes as before, ρ = 1, 2, 3: Gumbel, Fréchet, or Weibull. And you rescale in exactly the same way, with the same a_n and b_n, so you look at F_{k,n}(a_n + b_n y). This goes to a nice function, and let me write it explicitly: lim_{n→∞} F_{k,n}(a_n + b_n y) = G_ρ(y) Σ_{j=0}^{k−1} [−ln G_ρ(y)]^j / j!. It's a slightly more complicated formula; for those of you who like special functions, it can be viewed as a (normalised) incomplete gamma function evaluated at −ln G_ρ(y), but I will not comment too much on that. What is quite nice is that one really has an explicit limiting form for the k-th maximum.
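To see the k-th maximum formula at work, here is a small simulation sketch of my own (assuming an exponential parent distribution, for which a_n = ln n and b_n = 1 in the Gumbel class): the CDF of the third maximum, shifted by ln n, should approach G_1(y) Σ_{j=0}^{2} e^(−jy)/j! with G_1(y) = exp(−e^(−y)).

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
n, trials, k = 2_000, 4_000, 3

# Third-largest of n exponential variables, centered by a_n = ln(n), b_n = 1.
x = rng.exponential(size=(trials, n))
kth_max = np.sort(x, axis=1)[:, -k]

y = 0.0
empirical = (kth_max - np.log(n) <= y).mean()

# Limiting CDF: G_1(y) * sum_{j<k} (-ln G_1(y))^j / j!,  with -ln G_1(y) = e^{-y}.
u = exp(-y)
limit = exp(-u) * sum(u**j / factorial(j) for j in range(k))
print(f"empirical {empirical:.3f}  vs  limit {limit:.3f}")
```

For the exponential parent the finite-n formula is exact, so the agreement is limited only by sampling noise.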
So here, depending on ρ = 1, 2, or 3 — Gumbel, Fréchet, or Weibull — you have these very explicit formulas. I can even end with a more general result, and then this will close what I want to say about the IID case. What I just gave you concerns one maximum — the first, or the second, or the k-th one — but in fact there is a stronger result, which gives you the joint law of the k first maxima. This is known, it also has a quite nice and simple expression, and I will just give it to you. It is of course the strongest result, since it contains everything I told you before. It is a bit of work to show, but it is nice to know it exists, because once you know it you can compute essentially anything about the statistics of these extremes. So, the joint law of M_{1,n}, M_{2,n}, …, M_{k,n}. Again I am in the limit where k is fixed and n goes to infinity. This is a vector — a kind of multivariate, k-dimensional statistics, if you want. First you center and rescale all of these values with the same a_n and b_n as before: you look at (M_{i,n} − a_n)/b_n for i = 1, …, k, and when n goes to infinity this vector converges to a limiting vector (w_1, w_2, …, w_k). The joint law of these variables is known. And what is it? Well, it looks like IID random variables — it is a product measure: P(w_1, …, w_k) = G_ρ(w_k) Π_{i=1}^{k} [G_ρ'(w_i) / G_ρ(w_i)], where G_ρ' is just the derivative of G_ρ, and ρ is again 1, 2, or 3. A slightly complicated formula, but not that much.
So you see, there is this product measure: the variables look independent, but they are actually not quite independent, because they need to be ordered. This measure is non-zero only if w_1 > w_2 > … > w_k, and that ordering creates correlations between these k maxima, of course. [Question.] Yes, exactly — that is what I meant by taking k finite and n to infinity. Another limit you could study, for which at least I don't know any explicit result, is for instance k of order n — say you want to look at the middle of the ordered sequence. That would be another limit. In principle, this formula is exact and contains everything you want, and you can carry out different types of scaling analysis. What I did here is to look at the typical fluctuations in the regime where k is fixed and n is large, but you can imagine, I agree, a lot of different scaling regimes. What I expect is that in other regimes you will most likely lose universality — which does not mean they are uninteresting, but most likely they will be non-universal. Here, of course, there is something quite amazing, namely that the limit does not depend on anything. But you're right, and that's why I wanted to give you the formula, if you want to play with other regimes: this one is exact, and you can do all kinds of scalings. Yeah, that's a good point. Well, my intuition is that you would then be probing some kind of large-deviation regime, and usually when you enter large-deviation regimes you lose universality:
you become more sensitive to the details of the parent distribution p(x). But that's not a proof. Other questions? OK, so that's basically everything I wanted to tell you about the extreme statistics of IID random variables — we covered quite a large subject in the end. What I would like to do now, in the 15 minutes I still have: in the introduction yesterday I tried to convince you that this IID case is very nice, and I hope you could appreciate it. But nevertheless, most situations you encounter in statistical physics involve correlated random variables. One example that I will cover in the following is the case of random walks, and the reason I want to cover random walks is that they are a quite strongly correlated system. But before doing that full analysis, I want to come back to the IID hypothesis. It's true that I really needed the variables to be IID — without any correlations between them — to do all these computations. Now I want to show you some physical arguments to convince you that these results are in fact pretty robust: if you have only weak correlations in your system, the IID results can still hold. So before going to the harder case of strongly correlated systems, let's look at the case of weak correlations, and I will probably end with that. All these nice results hold for IID random variables; the question I want to raise here, and partially answer, is: what are the effects of correlations — what should I expect?
So let's look at the case of weak correlations, and let me present a physicist's argument — a kind of decimation, or renormalization, argument — which goes as follows. To fix what I mean by weak correlations: I have in mind that the index i of x_i represents a site in space, so that at site i there is some value x_i — a height, for instance, which can also be negative. So I have x_1 here, x_2 there, x_3, x_4, and so on; in general some x_i at site i. By weak correlations I mean that the connected two-point correlations decay exponentially: ⟨x_i x_j⟩_c ~ exp(−|i − j| / ξ). What does this mean? It means that beyond a length scale of order ξ you completely lose the correlations between your variables. So I will do a decimation, a renormalization in real space: I take my variables on the sites i and cut the system into blocks of size ξ. So I'm doing some real-space blocking, if you want. And what is the meaning of that? Well, roughly speaking, it means that each block is basically independent of the next one, which is independent of the next, et cetera: these blocks are independent.
Of course, for the sites near the block boundaries there are surface effects, which do contribute some correlations between neighbouring blocks. But these correlations are proportional to the surface: if you imagine a system in d dimensions with blocks of linear size l, they are of order l^(d−1), subleading compared to the bulk, where the blocks are essentially decorrelated. [Question.] So let me tell you what I do within each block: I define the block maximum — max 1 in the first block, max 2 in the second, and so on. In each block I just take the largest value. And neglecting these boundary terms in this way is the standard Peierls-type argument that you use when analysing phase transitions. Again, this is a heuristic argument, not a proof; I will give you a more precise result afterwards. So you divide your system into a certain number of blocks, n/ξ, which is still much bigger than 1 — because the correlations are short-ranged, ξ is of course much smaller than the size n of the system.
The x_max that I am after can then be written as the maximum over the block maxima, with the block index p ranging from 1 to n/ξ. The argument goes as follows: when n is large, you can neglect the correlations between these local maxima — that is essentially the definition of ξ — so the block maxima are effectively IID random variables. By this blocking procedure, assuming again that you can neglect the boundary correlations you were mentioning, you are basically back to IID random variables. So that suggests that this IID class is actually quite robust. And in fact there are several cases you can work out explicitly: this exponential-decay case corresponds essentially to the Ornstein-Uhlenbeck process, which you can solve exactly, and you can indeed show that in the large-n limit you recover the result for IID random variables. So that suggests — it's not a proof, again — that you are back to the extreme statistics of IID variables. And there is in fact a much stronger theorem, due to Berman. To show you how strong it is: suppose that the x_i form a Gaussian stationary process. That means that the distribution of the x_i is Gaussian — all the correlations are given by a Gaussian measure — and it is stationary, meaning that the correlation ⟨x_i x_j⟩_c is only a function of i − j: the correlations depend only on the distance.
Now, I was considering the case where this correlation C(n) decays exponentially, and I tried to argue — by hand-waving arguments, I agree — that you end up back at IID random variables. The theorem actually tells you much more: if C(n) decays just slightly faster than 1/log n, that is, if C(n) log n → 0 as n → ∞, then you are back to the IID case, and x_max behaves as the maximum of IID random variables. So this argument — not only the argument, but the result — is indeed extremely robust. And that means that all the results we derived are not completely useless: they hold in a very wide class of situations. [Is it readable?] Yeah, it's not very well written: C(n) times log n goes to 0. Note that 1/log n is something that decays very slowly, so this is a very weak requirement. So that's quite nice. But nevertheless, in some cases this is not enough, and what we will see tomorrow is that in some interesting cases — namely the case of random walks — we are obviously not in this situation: there, as I already mentioned, the correlations are growing, so this kind of argument cannot hold. We will see that we have to design a new theory, new methods, and you will see that for random walks the extreme-value questions turn out to be quite intimately related to first-passage properties of the walk. That's something I will discuss next time. OK, thank you for your attention — or maybe there are questions. [Question.] Sorry, yes, sure. This one? This one, OK.
So I defined, very briefly, what a Gaussian stationary process is: it is characterised by its correlation matrix C(i − j). Now, with my blocking argument I was considering the case of exponential decay, because I had in mind, for instance, the high-temperature phase of the Ising model, or anything with an exponential decay of correlations. The theorem says that if the correlation C(n) decays faster than 1/log n — that is what it means: log n times C(n) goes to 0 — then the statistics of x_max in the large-n limit is given by the IID result; this is of course for n → ∞. This general case is quite hard to show, but there are cases where you can check it explicitly: for instance, for the Ornstein-Uhlenbeck process, which has this exponential decay, I could give you some references where you can work out the distribution explicitly and show that it indeed converges to the Gumbel law. OK. Now the question.
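The weak-correlation claim can be illustrated numerically (my own sketch, assuming a discrete-time analogue of the Ornstein-Uhlenbeck process: a stationary Gaussian AR(1) chain with ⟨x_i x_j⟩ = a^|i−j|, so C(n) decays exponentially and the Berman condition C(n) log n → 0 holds). Its maximum should then behave like that of the same number of IID Gaussians:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, a = 20_000, 200, 0.8    # correlation length xi = -1/ln(a), about 4.5

# Stationary Gaussian AR(1): x_{i+1} = a x_i + sqrt(1 - a^2) eta_i,
# which gives exponentially decaying correlations <x_i x_j> = a^{|i-j|}.
eta = rng.standard_normal((n, trials))
x = np.empty((n, trials))
x[0] = eta[0]
c = np.sqrt(1.0 - a * a)
for i in range(1, n):
    x[i] = a * x[i - 1] + c * eta[i]
mean_max_ar1 = x.max(axis=0).mean()

# Same-length IID Gaussians for comparison.
mean_max_iid = rng.standard_normal((n, trials)).max(axis=0).mean()
print(f"mean max, AR(1): {mean_max_ar1:.3f}   IID: {mean_max_iid:.3f}")
```

The two means nearly coincide even at moderate n, consistent with the theorem: the weakly correlated chain falls into the same Gumbel class, with the same centring and scaling, as the IID case.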