It's a really great conference with a lot of nice talks. I will speak about rigidity for Pfaffian point processes. This is joint work with Alexander Bufetov and Yanqi Qiu. Sasha already told us what rigidity is, but that was at the very beginning of the conference, so let me remind you. We speak about point processes on R, and the corresponding Borel sigma-algebra is generated by the counting functions #_A, where A is a bounded Borel set. I will consider the corresponding restrictions: I take some set C, and my subalgebra F_C is generated by the counting functions #_I with I inside C. Let's continue. So my definition is as follows. I take some bounded B, and if the counting function #_B is measurable with respect to the completion of F_{R minus B}, then the process is called rigid. In other words: here is my R, I take my set B, I take some configuration, and I fix it outside B. This is exactly the picture that we've seen at the end of Sasha's lecture today. What he wanted to tell us is: can we say a lot about the corresponding conditional process inside? My question is much simpler: I fix the configuration outside; do I have some invariant? And it turns out that for determinantal processes, and for the slight generalizations I will consider, there is usually only one invariant, the number of points inside. Hence the name "rigidity". Perfect. Now, it seems we've mentioned determinantal processes several times during the conference, but if I'm not mistaken, we didn't mention the Pfaffian ones. So let me recall what we have here. We've seen several times that if I start, say, with the GUE, then the corresponding eigenvalue distribution is just the product of the square of the Vandermonde determinant times the product of the weights, Gaussian weights for the GUE.
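For reference, the eigenvalue density just described can be written in symbols (the standard formula, with the Gaussian weight of the GUE):

```latex
p(\lambda_1,\dots,\lambda_N)
  \;=\; \frac{1}{Z_N}\,\prod_{1\le i<j\le N}(\lambda_i-\lambda_j)^{2}\,
        \prod_{i=1}^{N} w(\lambda_i),
\qquad w(\lambda)=e^{-\lambda^{2}/2}.
```

For the orthogonal and symplectic ensembles mentioned next, the exponent 2 of the Vandermonde factor is replaced by 1 and 4 respectively.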
And we know that this can be rewritten just as the determinant of the Christoffel-Darboux kernel for the orthogonal polynomials with this particular weight. Now, if I want to move forward, I can consider, besides the GUE, the Gaussian orthogonal ensemble (GOE) and the Gaussian symplectic ensemble (GSE). In the orthogonal case we have real symmetric matrices, and instead of the square I have the modulus of the Vandermonde times the weights; in the symplectic case we have quaternionic matrices, and I have the product of the fourth powers. It turns out that we can rewrite these, not as a determinant, but as a Pfaffian (I will recall what a Pfaffian is in a few minutes) of some kernel; let me put an index on it for a moment. The first is the beta ensemble with beta equal to 1, and the other is the beta ensemble with beta equal to 4, so I will put K_1(x,y) here and K_4(x,y) there. As in the determinantal case, I have this formula for the corresponding distribution; and if I have the kernel of a projection, then I can integrate out my variables one after another, and what I actually get is that not only the density but all the correlation functions have a similar form: the Pfaffian of the matrix (K(lambda_i, lambda_j)) with i, j from 1 to k. So we can generalize and take this as the definition: a point process is Pfaffian if there exists a kernel such that all its correlation functions can be written in this form. Yes, I will tell it in just a second. So what is a Pfaffian? I take a skew-symmetric matrix, A transposed equal to minus A. We can bring it to some normal form, for example this one, by elementary transformations, and then the determinant of A is equal to the square of the determinant of the transformation matrix. In fact, what I get is that the determinant, as a polynomial in the entries of the matrix, is the square of some other polynomial, and this is the definition; I only have to fix the sign.
The usual convention is that the Pfaffian of J is 1, where J is the block-diagonal matrix built from the 2-by-2 blocks (0, 1; -1, 0). And you can see that if the size of the matrix is odd, then the determinant of a skew-symmetric matrix vanishes; this is not an interesting case, so we consider only matrices of even size. We have the definition of a Pfaffian point process, and we can move to the results. There are several results on rigidity for determinantal point processes. The first result, on the sine point process, is due to Ghosh, as Sasha told us already, and there is a general proposition due to Sasha. Our first interest is the universal processes, sine, Airy, Bessel, and they are covered by this proposition, as well as some others. If we move from the determinantal case to the Pfaffian case, then we will have several difficulties, as we will see soon, and at the moment we don't have a general proposition at our disposal; what we can do is prove rigidity case by case. So our first theorem: we distinguish the two limiting cases, the sine-1 Pfaffian point process and the sine-4 point process, and these processes are rigid. Our second result: the Bessel-1 and Bessel-4 point processes are also rigid. How can one prove rigidity of a point process? The main tool is the following proposition, due to Ghosh and Peres, as is the definition of rigidity. I take any bounded B. If for any epsilon I can construct a function f, depending on B and epsilon of course, such that f restricted to B is identically 1, and the variance of the corresponding linear statistic with respect to the point process is less than epsilon, then the process P is rigid. I recall that the linear statistic S_f is just the sum of the values of f over my configuration. Yes, yes, I wanted to come to this.
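Since the Pfaffian was just defined as the polynomial square root of the determinant of a skew-symmetric matrix, normalized by Pf(J) = 1, here is a small sanity check (my own illustration, not part of the talk): a recursive Pfaffian via expansion along the first row, tested against the determinant on a random integer skew-symmetric matrix.

```python
import itertools
import random

def pf(a):
    """Pfaffian of a skew-symmetric matrix, by expansion along the first row."""
    n = len(a)
    if n % 2 == 1:
        return 0  # odd size: the determinant, hence the Pfaffian, vanishes
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        # remove rows and columns 0 and j; the sign is (-1)^(j+1) (0-based)
        minor = [[a[r][c] for c in range(n) if c not in (0, j)]
                 for r in range(n) if r not in (0, j)]
        total += (-1) ** (j + 1) * a[0][j] * pf(minor)
    return total

def det(a):
    """Determinant by the Leibniz formula (fine for small matrices)."""
    n = len(a)
    total = 0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= a[i][perm[i]]
        total += sign * prod
    return total

# normalization: Pf(J) = 1 for the block-diagonal normal form J
J = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
assert pf(J) == 1

# Pf(A)^2 = det(A), exactly, for a random skew-symmetric integer matrix
random.seed(0)
n = 6
A = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = random.randint(-5, 5)
        A[j][i] = -A[i][j]
assert pf(A) ** 2 == det(A)
```

Integer entries keep the arithmetic exact, so the identity Pf(A)^2 = det(A) is checked with no floating-point tolerance.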
So in fact, if we speak about sine processes, our main tool is this proposition, and the sine processes can be done using this tool. But the sine processes are, moreover, translation invariant, and in that case we also have another nice tool to deal with rigidity; I'll try to come back to it at the end of the lecture. Also, for the sine process we have a more general result due to Chhaibi and Najnudel: in fact, the sine-beta processes are rigid for all beta. OK, if I want to use this proposition, I need some nice formula for the variance. What is the variance of S_f? It's the expectation of the square minus the square of the expectation. And you see that here we can just use the definition of the first correlation function, and there something close to the definition of the second correlation function. I want to rewrite it in a particular way: what we actually get is minus one half of the double integral of (f(x) - f(y)) squared times the second truncated correlation function, plus the integral of f(x) squared times the first correlation function plus the integral in y of the truncated one. Why do I write it in this particular way? Because it turns out that this last term is usually zero; this property is called perfect screening. Let me say, just to be safe, that the second truncated correlation function is the second correlation function minus the product of the first ones; it would be the second correlation function if the particles didn't feel each other, so it measures how far we are from the independent case. To use this, we need the formulas in our cases. First of all, let me recall (let me write it maybe here) what the second truncated correlation function is in the determinantal situation. I should write out the determinant, and what I get is just minus K(x,y) K(y,x). So what is written here? My star is just the statement that the integral of K(x,y) K(y,x) dy is equal to K(x,x).
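The two identities just written can be checked exactly in a discrete toy model (my own illustration): for a determinantal process given by a finite projection kernel K, the screening identity sum_y K(x,y)K(y,x) = K(x,x) holds, and the variance of a linear statistic, written with the truncated correlation rho2T(x,y) = -K(x,y)K(y,x), agrees with its rearranged form -1/2 sum (f(x)-f(y))^2 rho2T(x,y).

```python
import random

random.seed(1)
n, k = 8, 3

# Gram-Schmidt: k orthonormal vectors in R^n, giving a rank-k projection K
vecs = []
while len(vecs) < k:
    v = [random.gauss(0, 1) for _ in range(n)]
    for u in vecs:
        c = sum(a * b for a, b in zip(u, v))
        v = [a - c * b for a, b in zip(v, u)]
    norm = sum(a * a for a in v) ** 0.5
    vecs.append([a / norm for a in v])

K = [[sum(vecs[t][x] * vecs[t][y] for t in range(k)) for y in range(n)]
     for x in range(n)]

# perfect screening: sum_y K(x,y) K(y,x) = K(x,x), since K^2 = K
for x in range(n):
    assert abs(sum(K[x][y] * K[y][x] for y in range(n)) - K[x][x]) < 1e-12

rho1 = [K[x][x] for x in range(n)]
rho2T = [[-K[x][y] * K[y][x] for y in range(n)] for x in range(n)]

f = [random.uniform(-1, 1) for _ in range(n)]

# Var S_f = sum f^2 rho1 + double sum f(x) f(y) rho2T(x,y)
var1 = sum(f[x] ** 2 * rho1[x] for x in range(n)) + \
       sum(f[x] * f[y] * rho2T[x][y] for x in range(n) for y in range(n))

# rearranged form; the leftover term vanishes exactly by screening
var2 = -0.5 * sum((f[x] - f[y]) ** 2 * rho2T[x][y]
                  for x in range(n) for y in range(n))
assert abs(var1 - var2) < 1e-12
```

The rearrangement is just the algebraic identity f(x)f(y) = (f(x)^2 + f(y)^2)/2 - (f(x) - f(y))^2/2 followed by the screening identity, so the agreement here is exact up to rounding.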
That is, perfect screening in the determinantal case is just equivalent to the statement that we have the kernel of a projection. And we remember that for sine, Airy, Bessel, and many other examples, we even have an orthogonal projection. What do we have in the Pfaffian case? I will write it here; I will probably just give you the answer. My second truncated correlation function has a very nice form: it is just minus the determinant of my matrix kernel. I forgot to tell you that for a Pfaffian point process I need a matrix of even size, and the usual way to do it is to say that I have a matrix kernel: the kernel is a 2-by-2 matrix with entries K_11 and so on. We can certainly take the determinant of this 2-by-2 matrix, and what we get is the second truncated correlation function. Now, the perfect screening in the Pfaffian case. First, just a very short digression. In the determinantal case we have a general existence theorem, the Macchi-Soshnikov theorem, which we heard about probably yesterday. In the Pfaffian case there is no such theorem, and in fact it seems there will be no such theorem: we have either some explicit construction of a Pfaffian process, or we can obtain one by passing to the corresponding limit. Now, speaking about the limits: it turns out that if we start with these orthogonal or symplectic ensembles, then the limiting kernel has some particular form. I will write it in the case of K_4, because it is a bit simpler. Let me write it like this: the diagonal entries are S(x,y) and S(y,x), one off-diagonal entry is the derivative with respect to y, and the other is the integral from y to x of S(x,t) dt. And a short proposition: if this entry S is itself the kernel of a projection, plus some additional constraints that always hold, then the whole matrix kernel is also the kernel of a projection. How do we see it? We multiply row by column. First we obtain an entry like this, and this is just the standard reproducing property.
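Schematically (hedged: the exact placement of signs and normalizing factors varies between references, so this is only the shape described in the talk, not a definitive convention), the 2-by-2 matrix kernel looks like

```latex
K(x,y) \;=\;
\begin{pmatrix}
  S(x,y) & \partial_y S(x,y)\\[4pt]
  \displaystyle\int_{y}^{x} S(x,t)\,dt & S(y,x)
\end{pmatrix},
\qquad
\rho_2^{T}(x,y) \;=\; -\det K(x,y).
```

The proposition then says: if the scalar kernel S is a projection (and the boundary terms in the integrations by parts vanish), the whole matrix kernel is a projection.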
Or we multiply a derivative by an integral, and we integrate by parts; in our situation there is no additional boundary term, because everything vanishes at infinity, and this is sufficient. The first and easiest example is the symplectic sine process. Here the entry is just sin(x - y) over (x - y), and the other entries are its derivative and its antiderivative; this is by definition my sine-4 kernel. And we see that in this case what we have here is indeed just the standard sine kernel, so it is a projection. So the corresponding operator is a projection, and it easily follows that we do have this perfect screening property. And coming back here, the extra term is zero, and what we actually have (maybe I write it also on this blackboard) is that in this case my variance is just the minus-one-half term alone, nothing else. OK, I have a nice formula, at least for this particular case. Now I want to construct the test function. Let me recall that I have the proposition, and I need to construct a function with small variance. Obviously, it's sufficient to do it just for an increasing family of intervals, so I'll take the interval from minus R to R. I say that my function is 1 there, and that it is 0 when the absolute value of x is sufficiently large. And it turns out that logarithmic decay is sufficient: namely, let me write phi(x), which is 1 for |x| at most R, decays logarithmically, like log(T/|x|) divided by log(T/R), for R < |x| at most T, and is 0 beyond. Yes, it seems that everything is correct. First of all, let us briefly see what we have in the determinantal case. What I actually have is the square of the sine kernel, which I bound by 1 over (x - y) squared. In fact, I consider the case when x and y are greater than 0. It turns out that I actually have several domains of integration, and since I have the difference squared, the only interesting part is where the difference phi(x) - phi(y) is nonzero, so the integrand is the difference of two functions of this kind, times the kernel bound.
Now we can get a simple estimate: what we actually get carries a constant over the square of the logarithm in front of everything. I will forget about the sine and rewrite this part just as a double integral of the squared difference of logarithms: dy over y, and, in the variable lambda = x/y, (log lambda) squared over (lambda - 1) squared d lambda, with y running from R to T; and I have not forgotten about the square of the logarithm in the denominator. So the dy over y integration gives me one logarithm, but nevertheless I have the square of the logarithm in the denominator, and the lambda-part is convergent. So the whole thing tends to 0 when T tends to infinity. I just need some nice estimate of my kernel, and that's it. And it turns out that we can generalize this argument so that it works for most of the determinantal kernels that we're interested in. Now, let us look at the Pfaffian case. What do we have here? First of all, we take the determinant of this two-by-two matrix and put it inside the same integral. What do we get? The square of the sine part is just the same as before. The integral entry we can write just as a constant minus an integral out to infinity, and this is bounded. And after the differentiation, we will have a term sine over (x - y) squared, which we can forget about, because everything is convergent there, and a term cosine over (x - y). So what I want to tell you is that if we're interested in the variance of my S_phi with respect to the sine-4 process, then the only part left to handle is of this last form, and once again it's a constant over the logarithm squared of T. What can I do in this particular case? You can easily see that I can integrate the cosine out of this formula by parts, and I obtain once again an estimate of the required form. But let us look carefully at what has just happened. OK, now we're happy, because we have a kernel of this explicit form: we can just integrate it out, and because of these rapid oscillations we gain an additional power in the denominator, and everything is OK.
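To see why logarithmic decay suffices, here is a crude numerical check of the model integral dominating the variance bound (my own illustration; the kernel squared is replaced by its upper bound 1/(x - y)^2, and only the region x, y > 0 is integrated): the Dirichlet-type form of the log-cutoff phi decreases as T grows, consistent with an estimate of order 1/log(T/R).

```python
import math

def phi(x, R, T):
    """Log-cutoff test function: 1 on [0, R], log decay on (R, T], 0 beyond."""
    x = abs(x)
    if x <= R:
        return 1.0
    if x >= T:
        return 0.0
    return math.log(T / x) / math.log(T / R)

def dirichlet(T, R=1.0, n=700):
    """Midpoint approximation of the double integral of
    ((phi(x)-phi(y))/(x-y))^2 over x, y > 0, on a logarithmic grid."""
    lo, hi = math.log(R) - 4.0, math.log(T) + 4.0
    du = (hi - lo) / n
    us = [lo + (i + 0.5) * du for i in range(n)]
    vs = [u + 0.4 * du for u in us]          # offset so x != y exactly
    xs = [math.exp(u) for u in us]
    ys = [math.exp(v) for v in vs]
    fx = [phi(x, R, T) for x in xs]
    fy = [phi(y, R, T) for y in ys]
    total = 0.0
    for i in range(n):
        for j in range(n):
            q = (fx[i] - fy[j]) / (xs[i] - ys[j])
            total += q * q * xs[i] * ys[j] * du * du   # dx dy = x y du dv
    return total

d3 = dirichlet(1e3)   # T = 10^3
d6 = dirichlet(1e6)   # T = 10^6
assert d3 > d6 > 0.0  # the form shrinks as the cutoff scale T grows
```

The offset grid avoids the removable singularity at x = y (where the quotient tends to phi'(x), which is finite); the assertion only checks the qualitative decay, not the precise constant.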
But if you think about any more general kernel, there will be no such tool: you will have some oscillations, and you have to deal with them somehow to obtain a result of a similar kind. And if we now pass to the Bessel case, OK, let me write it for you. This will be, say, S_Bessel(x,y), and in all the other places of my matrix the corresponding derivative and integral of S_Bessel. So I have the standard Bessel kernel here, as in the sine case, plus some additional term, which I don't want to write explicitly; let me write it as a product, m_1(x) times m_2(y). And it means, well, it doesn't mean, but we can suspect that something happens: if S(x,y) alone were a projection, then my K would also be a projection; but if I have this additional term, it's not at all obvious how we can get a projection as a result. And actually we don't. And it's not a miracle: I have a sequence of finite-dimensional projections, I'm interested in the limit, and what I get in the limit is not a projection; it happens sometimes. But nevertheless, what we get as a result is that perfect screening in the Bessel case does not work; it just doesn't hold, so to say. And in fact, most of these formulas for the Pfaffian kernels are written in Professor Forrester's book on log-gases, and if we combine several passages from several parts of the book, it is effectively written there that screening should hold. We expected that it does, but in fact it doesn't. So we needed some additional step to deal with the first integral, if you remember it. And it turns out that we have a weaker, integrated version: the corresponding double integral is 0 in our case, and this is sufficient to deal with the first term. But moreover, let us come back for a second: in the sine case we had just one summand that was difficult to deal with.
If we have an additional term here, then after the integrations and differentiations we have a lot of terms, and we should deal with them one by one, or in some more clever way. We have a number of propositions about how to estimate such integrals, but nevertheless it is a long paper, and so at the moment only the sine and Bessel cases are done. I have, say, five more minutes, right? Even six more minutes, perfect. In fact, if we consider the sine process, which is translation invariant with more or less simple formulas, then we can deal with it in several different ways, and there is a nice idea: pass to the corresponding Fourier transform. So, if a process is translation invariant, then certainly the first correlation function is just equal to some constant rho, and the second correlation function depends only on the difference of its arguments. And now we consider the second truncated correlation function once again, and we take its Fourier transform. It turns out that we have a sufficient criterion for rigidity of translation-invariant processes. Namely, consider rho plus the Fourier transform of the truncated correlation function; the result is nonnegative. And, how to write it: there is actually only one parameter here, let me put z for it. If there exists a constant C such that this quantity is bounded by C times the absolute value of z near zero, then the corresponding translation-invariant process is rigid.
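For the ordinary sine process this criterion can be checked by hand, and numerically as a sanity check (my own illustration, with rho = 1/pi and the translation-invariant truncated correlation rho2T(u) = -sin^2(u)/(pi u)^2): the structure function rho plus the Fourier transform of rho2T equals |z|/(2 pi) for |z| <= 2, which is nonnegative and linearly small near 0, so the criterion applies.

```python
import math

RHO = 1.0 / math.pi  # intensity of the sine process

def rho2T(u):
    """Second truncated correlation of the sine process (difference variable)."""
    if u == 0.0:
        return -1.0 / math.pi ** 2   # limit of -sin^2(u)/(pi u)^2 at u = 0
    return -(math.sin(u) / (math.pi * u)) ** 2

def structure(z, T=3000.0, h=0.05):
    """S(z) = rho + Fourier transform of rho2T at z, by the midpoint rule.
    The integrand is even, so integrate cos(z u) * rho2T(u) over (0, T) twice."""
    n = int(T / h)
    s = sum(math.cos(z * (i + 0.5) * h) * rho2T((i + 0.5) * h) for i in range(n))
    return RHO + 2.0 * h * s

# exact value: S(z) = |z| / (2 pi) for |z| <= 2 (a triangle-function computation)
for z in (0.5, 1.0, 1.5):
    assert abs(structure(z) - z / (2.0 * math.pi)) < 1e-2
```

The truncation error is of order 1/T because rho2T decays like 1/u^2, which is why a modest cutoff already reproduces the linear behavior near z = 0 to within the tolerance.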