Okay, thank you very much. Actually, listening to today's talks, I decided to change the presentation a little. It will be more of an introduction, but I will mention several things that already came up today, the sine process and some connections with probability, so I think it will fit well. The purpose of what I want to say is to tell you about Toeplitz determinants and why they are interesting. Well, I don't know whether you will get interested or not, but at least I will try. So let us first consider a function f which is integrable on the unit circle. We can define the Fourier coefficients f_k of this function, and using the Fourier coefficients we can define what is known as a Toeplitz matrix: the n-by-n matrix T_n(f) with entries (T_n)_{j,k} = f_{j-k}, where the indices j, k run from 0 to n-1; f is called the symbol of the Toeplitz matrix. So the matrix is built in the following way: along each diagonal the entry is the same Fourier coefficient, so it is constant along diagonals, with f_0 on the main diagonal and f_1 and f_{-1} on the adjacent diagonals. The corresponding determinant D_n(f) = det T_n(f) is called the Toeplitz determinant. And we have the following question: what happens with D_n as n tends to infinity? This is the main question, and now I will tell you what kind of applications it might have. There will be a few general results and some examples, a sample of applications. In the first example, we consider the following symbol on the unit circle, depending on a parameter alpha: f vanishes on the arc |theta| < alpha, and f = 1 on the complementary arc alpha < theta < 2 pi - alpha.
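The definition is easy to play with numerically. A minimal sketch, assuming Python with NumPy; the symbol here, with f_0 = 2 and f_{±1} = 1, is chosen only as an illustration, because the resulting tridiagonal Toeplitz determinant is known in closed form, D_n = n + 1:

```python
import numpy as np

def toeplitz_det(fourier, n):
    """Build the n x n Toeplitz matrix T_n(f) with entries f_{j-k}
    and return its determinant D_n(f)."""
    T = np.array([[fourier(j - k) for k in range(n)] for j in range(n)],
                 dtype=complex)
    return np.linalg.det(T)

# Symbol f(e^{i t}) = 2 + 2 cos t: Fourier coefficients f_0 = 2, f_{+-1} = 1.
coeffs = {0: 2.0, 1: 1.0, -1: 1.0}
f = lambda k: coeffs.get(k, 0.0)

# The resulting tridiagonal matrix with 2 on the main diagonal and 1 on the
# adjacent diagonals has determinant n + 1 (three-term recursion d_n = 2 d_{n-1} - d_{n-2}).
for n in range(1, 8):
    print(n, toeplitz_det(f, n).real)
```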
So now let us look at the corresponding Toeplitz determinant. First of all, the Fourier coefficients are very easy: f_k = (1/2 pi) integral from alpha to 2 pi - alpha of e^{-i k theta} d theta = -(e^{i k alpha} - e^{-i k alpha})/(2 pi i k). And what is this? It is a sine: f_k = -sin(alpha k)/(pi k), with a minus sign, for k not equal to zero; the case k = 0 you can do yourself and see what happens (f_0 = 1 - alpha/pi). So this is already something which reminds you of the sine kernel that was considered before. And indeed, from here it is not so difficult to derive the following result. Take the limit of the Toeplitz determinant with this symbol, but instead of a fixed alpha put a varying one: alpha = 2s/n, where s is fixed and n is the same n as in the determinant. This is a so-called double scaling limit, because n appears in two places. Then one obtains a Fredholm determinant with the sine kernel: lim_{n -> infinity} D_n(2s/n) = det(I - K_s), where the operator K_s acts on the interval (-1, 1) with kernel K_s(x, y) = sin(s(x - y))/(pi (x - y)). That is the famous sine kernel; this is a simple exercise in operator theory, maybe. So let us call this limit simply P(s). This object can be interpreted as a probability, the probability of a gap: for the sine-kernel point process, it is the probability that there are no particles in the interval (-s/pi, s/pi). Equivalently, it is a gap probability for various ensembles of random matrices, most typically the Gaussian Unitary Ensemble.
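Both steps can be sketched numerically, assuming Python with NumPy (the quadrature size, discretization order, and tolerances below are ad hoc choices, not from the talk): the Fourier-coefficient formula is checked against direct quadrature over the arc, and the double-scaling limit is illustrated by comparing D_n(2s/n) with a Gauss-Legendre (Nyström) discretization of det(I - K_s):

```python
import numpy as np

def f_k(k, alpha):
    """Fourier coefficients of the symbol: 0 on the arc |theta| < alpha, 1 elsewhere."""
    return 1 - alpha / np.pi if k == 0 else -np.sin(k * alpha) / (np.pi * k)

# Check the k != 0 formula against a direct midpoint-rule quadrature over the arc.
alpha, N = 0.7, 200000
theta = alpha + (np.arange(N) + 0.5) * (2 * np.pi - 2 * alpha) / N
for k in (1, 2, 5):
    quad = np.mean(np.exp(-1j * k * theta)) * (2 * np.pi - 2 * alpha) / (2 * np.pi)
    assert abs(quad - f_k(k, alpha)) < 1e-6

def sine_kernel_det(s, m=60):
    """det(I - K_s) on (-1, 1) via Gauss-Legendre (Nystrom) discretization."""
    x, w = np.polynomial.legendre.leggauss(m)
    d = x[:, None] - x[None, :]
    K = (s / np.pi) * np.sinc(s * d / np.pi)   # sin(s d)/(pi d), diagonal = s/pi
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

# Double-scaling limit: D_n with alpha = 2s/n approaches P(s) = det(I - K_s).
s, n = 1.0, 400
T = np.array([[f_k(j - k, 2 * s / n) for k in range(n)] for j in range(n)])
print(np.linalg.det(T), sine_kernel_det(s))
```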
Okay, so this is the interpretation of this determinant as a gap probability. So what is the application? So far it is just an observation; how can we use it? We have the following question: what is the behavior of this probability as s goes to infinity, the probability of this gap for large s? The formula was conjectured by Mehta and then, with the constant, by Dyson. And the key in the derivation is this property: the asymptotics of Toeplitz determinants, for which there is a good method, the Riemann-Hilbert method. Namely, one obtains, after a lengthy calculation, that the logarithm of this determinant is given by the following formula: log D_n(2s/n) = n^2 log cos(s/n) - (1/4) log(n sin(s/n)) + c_0 + O(1/(n sin(s/n))). This holds as n goes to infinity, uniformly for s larger than some fixed s_0 > 0 and less than a constant times n; the error term 1/(n sin(s/n)) is the important one. The constant c_0 has an explicit form: c_0 = (1/12) log 2 + 3 zeta'(-1), the derivative of the Riemann zeta function at the point -1. Now in this formula you can fix s and take the limit n to infinity, and we obtain log P(s) = -s^2/2 - (1/4) log s + c_0 + O(1/s). And this is the answer to the question. In fact, to tell you the truth, everything except this constant you can obtain without Toeplitz determinants, but to get the constant you need them, and in any case it is a good method.
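The asymptotic formula above can be tested directly against the determinant; a sketch assuming Python with NumPy, with the numerical value of zeta'(-1) hardcoded and the matrix size and tolerance chosen ad hoc:

```python
import numpy as np

# zeta'(-1) = 1/12 - log(Glaisher constant); numerical value hardcoded.
ZETA_PRIME_M1 = -0.1654211437004509
c0 = np.log(2) / 12 + 3 * ZETA_PRIME_M1

def f_k(k, alpha):
    """Fourier coefficients of the arc symbol (0 on |theta| < alpha, 1 elsewhere)."""
    return 1 - alpha / np.pi if k == 0 else -np.sin(k * alpha) / (np.pi * k)

def log_Dn(s, n):
    """log det T_n(f) for the double-scaling symbol with alpha = 2s/n."""
    T = np.array([[f_k(j - k, 2 * s / n) for k in range(n)] for j in range(n)])
    sign, logdet = np.linalg.slogdet(T)
    return logdet

def asym(s, n):
    """n^2 log cos(s/n) - (1/4) log(n sin(s/n)) + c0, the stated asymptotics."""
    return n**2 * np.log(np.cos(s / n)) - 0.25 * np.log(n * np.sin(s / n)) + c0

for s in (3.0, 6.0):
    print(s, log_Dn(s, 300), asym(s, 300))
```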
Okay, so about 15 minutes, right? So that's it for this application; the next step. If you have questions, ask them now, because I will erase the board and it will be lost. Ah, yes, a logarithm, that's true, good observation. Any more mistakes found? So you notice that P(s) decays very rapidly: it is a Gaussian decay, e^{-s^2/2}, but with a pre-factor. Okay, so that was the situation when the symbol is supported on an arc of the circle and is zero on the rest. For the case of a symbol on the whole circle, without zeros, there is a much more general statement. One could generalize it somewhat by considering other classes of symbols, but there would be some limitations, so I think I don't need this. But of course I cannot avoid mentioning the most standard result, which is the strong Szegő limit theorem. So consider an integrable function f such that its logarithm, which I will call V, is, well, I will make it a little less general than possible: smooth on the unit circle. Of course f then cannot have a zero or an infinity; it is a nice function without zeros or singularities. Then the statement is the following: log D_n(f) = n V_0 + sum_{k=1}^infinity k V_k V_{-k} + o(1) as n tends to infinity, where V_k are now the Fourier coefficients of V = log f. How small the error term is depends on how smooth the function is; if it is analytic, the error decays exponentially. So this is the statement of the strong Szegő limit theorem. It was proven in two steps: first, in 1915, Szegő proved only the first term, and that was called the Szegő limit theorem.
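The strong Szegő theorem is easy to check on an explicit symbol; a sketch assuming Python with NumPy/SciPy. For V(e^{it}) = 2a cos t (so V_1 = V_{-1} = a, V_0 = 0), the Fourier coefficients of f = e^V are modified Bessel functions, f_k = I_k(2a), and the theorem predicts D_n -> exp(sum_k k V_k V_{-k}) = e^{a^2}; since the symbol is analytic, the convergence is extremely fast:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_k

# Symbol f = e^V with V(e^{i t}) = 2a cos t; its Fourier coefficients are
# f_k = I_k(2a), from the generating function of the modified Bessel functions.
a = 1.0

def D_n(n):
    T = np.array([[iv(j - k, 2 * a) for k in range(n)] for j in range(n)])
    return np.linalg.det(T)

# Strong Szego limit: D_n -> exp(1 * V_1 * V_{-1}) = e^{a^2}.
for n in (2, 5, 10, 20):
    print(n, D_n(n), np.exp(a**2))
```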
Then, almost forty years later, in 1952, he proved the second term, and now it is called the strong Szegő theorem because it has both terms. He returned to it at that time because it had been discovered that such determinants describe correlations of the two-dimensional Ising model, and that is why he decided to do this. But here I will not assume anything about the Ising model; rather, there is a probabilistic interpretation which I really have to mention. So, first of all, note the following property of Toeplitz determinants: D_n(f) can also be written as a multiple integral (Heine's identity), D_n(f) = (1/(n! (2 pi)^n)) times the n-fold integral over [0, 2 pi]^n of the square of the Vandermonde determinant, prod_{1 <= j < k <= n} |e^{i theta_j} - e^{i theta_k}|^2, times the product prod_{j=1}^n f(e^{i theta_j}), with respect to d theta_1 ... d theta_n. If you look at that, you will recognize that it can be interpreted as an expectation of a linear statistic: writing f = e^V, we have D_n(e^V) = E_{CUE} e^{tr V(U)}, the expectation with respect to the Circular Unitary Ensemble, that is, the n-dimensional unitary group with Haar measure. When you take this expectation, you integrate out the so-called angular variables, and what remains is an integral over the eigenvalues e^{i theta_j} alone; and there e^{tr V(U)} = prod_j e^{V(e^{i theta_j})} = prod_j f(e^{i theta_j}). So if you know about this, it is almost obvious. This observation shows that the theorem has a probabilistic interpretation: a central limit theorem follows from the Szegő limit theorem. Here we assume that V_0 = 0, so the zeroth Fourier coefficient of V vanishes, and that V is real-valued.
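Heine's identity can be illustrated by Monte Carlo, assuming Python with NumPy/SciPy; Haar-distributed unitaries are sampled by the standard QR-of-Ginibre recipe (with the phase correction taken from the diagonal of R), and the sample size and tolerance are arbitrary choices:

```python
import numpy as np
from scipy.special import iv

rng = np.random.default_rng(0)

def haar_unitaries(n, m):
    """m Haar-distributed n x n unitaries via QR of complex Ginibre matrices."""
    G = (rng.standard_normal((m, n, n)) + 1j * rng.standard_normal((m, n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    d = np.einsum('mii->mi', R)          # diagonals of the R factors
    return Q * (d / np.abs(d))[:, None, :]  # fix column phases -> Haar measure

# Heine's identity: D_n(e^V) = E_CUE exp(tr V(U)).
# Take V(e^{i t}) = 2a cos t, so tr V(U) = 2a Re tr U and f_k = I_k(2a).
n, a, m = 4, 0.5, 100000
U = haar_unitaries(n, m)
tr = np.einsum('mii->m', U)
mc = np.mean(np.exp(2 * a * tr.real))
T = np.array([[iv(j - k, 2 * a) for k in range(n)] for j in range(n)])
print(mc, np.linalg.det(T))
```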
So, we can compute the expectation E_{CUE} e^{i t tr V(U)}; I am just writing the characteristic function of the random variable tr V(U). But I know the result, because this is exactly the theorem: characteristic functions of such linear statistics are themselves Toeplitz determinants, D_n(e^{i t V}), so you can also think of it this way. Now V_0 = 0, as I assumed, and in the theorem V should be replaced by i t V. Then the first term vanishes, and the second gives -t^2 sum_{k >= 1} k V_k V_{-k} = -t^2 sum_{k >= 1} k |V_k|^2; dividing and multiplying by two, if you call sigma^2 = 2 sum_{k >= 1} k |V_k|^2, the characteristic function is e^{-t^2 sigma^2 / 2} (1 + o(1)). So it means that the random variable tr V(U) converges in distribution to a normal random variable centered at zero with variance sigma^2. This is what I wanted to say about the Szegő theorem, and maybe the last thing I will say, just to finish: there are also interesting cases where the symbol has singularities. If the symbol has singularities, then things may diverge; the symbol can have a zero on the circle, for example. So I just want to write you the formula in the simplest such case: the symbol f has the same smooth component e^V as before, but now at m points z_1, ..., z_m on the circle it can have zeros, or maybe infinities rather than poles, or oscillations if the exponents are imaginary: f(z) = e^{V(z)} prod_{l=1}^m |z - z_l|^{2 alpha_l}, with alpha_l positive, say, so let's call them zeros. How do you modify the formula then? This will be the last formula that I write, and then you can do whatever you want with it. This is a particular case of so-called Fisher-Hartwig singularities.
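The central limit theorem above can be sketched by simulation, assuming Python with NumPy. For V(e^{it}) = 2 cos t, so V_1 = V_{-1} = 1 and V_0 = 0, the predicted limiting variance is sigma^2 = 2 * sum_k k |V_k|^2 = 2 (and in fact Var(2 Re tr U) = 2 exactly for every n >= 1, by the Diaconis-Shahshahani moment computations):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitaries(n, m):
    """m Haar-distributed n x n unitaries via QR of complex Ginibre matrices."""
    G = (rng.standard_normal((m, n, n)) + 1j * rng.standard_normal((m, n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(G)
    d = np.einsum('mii->mi', R)
    return Q * (d / np.abs(d))[:, None, :]

# V(e^{i t}) = 2 cos t: tr V(U) = 2 Re tr U, predicted limit N(0, sigma^2 = 2).
n, m = 6, 50000
tr = np.einsum('mii->m', haar_unitaries(n, m))
samples = 2 * tr.real
print(np.mean(samples), np.var(samples))
```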
So, again, the beginning will be the same as before, n V_0 + sum_{k >= 1} k V_k V_{-k}, but now for this part there will be a logarithmic factor as well, a logarithmic addition: the term (sum_{j=1}^m alpha_j^2) log n. So this is the new feature: the singularities lead to this type of behavior. And then there will be a rather complicated constant. First of all, there is the sum of logarithms sum_{j=1}^m log( G(1 + alpha_j)^2 / G(1 + 2 alpha_j) ), where G is the so-called Barnes G-function, which has the property G(z + 1) = Gamma(z) G(z), with Gamma the standard gamma function. But there will also be terms which describe the interaction between the singularities and the smooth part, of the sort -sum_j alpha_j (V(z_j) - V_0), and a term which describes the interaction between the singularities themselves, of the form -2 sum_{j < k} alpha_j alpha_k log |z_j - z_k|. And that is it; this is what happens if you include singularities. This is only a particular case of Fisher-Hartwig singularities, and I am not going to say much more. You could maybe guess that there should be some logarithmic correction, because the sum k V_k V_{-k} diverges logarithmically for such singular symbols; but of course it is impossible to guess the precise constant. [About the Barnes function:] it is a generalization of the factorial; it is actually a product of factorials, in some sense: G(n + 1) = 0! 1! 2! ... (n - 1)!. [About z:] ah, z_j, that is a good question. It is a point on the circle; these are the points on the circle where the symbol has singularities. They should be on the circle; if not, then the symbol does not have singularities, as far as we are concerned.
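For a pure Fisher-Hartwig symbol with a single zero and no smooth part, everything is explicit, which gives a check of the alpha^2 log n term; a sketch assuming Python with NumPy/SciPy, using the classical closed-form Fourier coefficients and the known exact product formula for D_n:

```python
import numpy as np
from scipy.special import gamma, gammaln

# Pure Fisher-Hartwig symbol with one zero at z = 1: f(z) = |z - 1|^{2 alpha}.
# Classical exact formulas:
#   f_k = (-1)^k Gamma(1 + 2 alpha) / (Gamma(1 + alpha + k) Gamma(1 + alpha - k)),
#   D_n = prod_{j=1}^{n} Gamma(j) Gamma(j + 2 alpha) / Gamma(j + alpha)^2.
alpha = 0.3

def f_k(k):
    return (-1) ** k * gamma(1 + 2 * alpha) / (gamma(1 + alpha + k) * gamma(1 + alpha - k))

def log_Dn_exact(n):
    j = np.arange(1, n + 1)
    return np.sum(gammaln(j) + gammaln(j + 2 * alpha) - 2 * gammaln(j + alpha))

# Cross-check the product formula against a direct determinant at small n.
n = 6
T = np.array([[f_k(j - k) for k in range(n)] for j in range(n)])
print(np.log(np.linalg.det(T)), log_Dn_exact(n))

# The singularity contributes alpha^2 * log n: after subtracting it,
# the sequence settles down to a constant (log G(1+alpha)^2/G(1+2alpha)).
for n in (50, 100, 200):
    print(n, log_Dn_exact(n) - alpha**2 * np.log(n))
```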
Yes, and finally, a final word to explain the title: I was initially planning to say what happens when the singularities move around the circle and merge together, but that is a more special subject, and I think this was better for today. Okay, thanks a lot.