First of all, I'd like to thank the organizers for the invitation. Today I will speak about my joint work in progress with Sasha Bufetov, devoted to some dynamical properties of determinantal random point processes; more precisely, we will speak about the behavior of trajectories of the sine process. Although Yanqi has already spoken about determinantal processes, let me recall some basic definitions, in a very informal way.

Determinantal random point processes are a special class of random point processes, so let me first define random point processes in general. To this end, take some phase space; let it be just the real line (you could take Rⁿ, but it would be just the same, so for simplicity I consider processes on the real line). Let us take some particles and drop them randomly on this line, in such a way that, first, at each throw the number of particles (I denote the number of particles by #) is at most countable, and second, for any bounded Borel subset of the real line the number of particles hitting this set is finite: in a bounded set you can have only finitely many particles. This construction is called a random point process. To define it rigorously, you take the space of locally finite configurations on R, the sigma-algebra generated by the counting random variables of this type, and some probability measure on this space; the triple of the space, the sigma-algebra, and the measure is called a random point process.

Let us assume that the correlation functions exist. The correlation functions are a family of functions ρ_n, n ≥ 1, where ρ_n: Rⁿ → R₊ is a function of n variables such that for any pairwise distinct points x₁, ..., x_n in R we have the following simple relation: the probability that an infinitesimal neighborhood of each of the points x₁, ..., x_n contains exactly one particle equals ρ_n(x₁, ..., x_n) dx₁ ... dx_n. So the correlation function can be seen as a probability density for the case when you are interested in a fixed number of particles and not in the others. In applications such functions usually exist. Let me mention perhaps the simplest example of a random point process, the Poisson process: you can regard the points of increase of this process as particles dropped on the real line, and then the correlation functions are just constants, ρ_n ≡ λⁿ, where λ is the parameter of the Poisson process.

Now, the definition. A random point process is called determinantal if there exists a function K of two variables, called the correlation kernel or just the kernel, such that the correlation functions have the following determinantal form: ρ_n(x₁, ..., x_n) = det(K(x_i, x_j)), the determinant of the n × n matrix whose first row is K(x₁, x₁), ..., K(x₁, x_n) and whose last row is K(x_n, x₁), ..., K(x_n, x_n). And this should hold for every n.
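To make the definition concrete, here is a minimal numerical sketch (the helper names are just illustrative): it evaluates ρ_n = det(K(x_i, x_j)) for the sine kernel that will appear below, and shows the repulsion between nearby particles.

```python
import numpy as np

def sine_kernel(x, y):
    """Sine kernel K(x, y) = sin(x - y) / (pi (x - y)), with K(x, x) = 1/pi."""
    d = np.subtract.outer(np.asarray(x, float), np.asarray(y, float))
    out = np.empty_like(d)
    small = np.abs(d) < 1e-12
    out[small] = 1.0 / np.pi
    out[~small] = np.sin(d[~small]) / (np.pi * d[~small])
    return out

def correlation_function(points):
    """n-point correlation rho_n(x_1, ..., x_n) = det(K(x_i, x_j))."""
    pts = np.asarray(points, dtype=float)
    return np.linalg.det(sine_kernel(pts, pts))

print(correlation_function([0.0]))        # ~0.3183 = 1/pi, the intensity
print(correlation_function([0.0, 3.0]))   # ~(1/pi)^2: near-independence far apart
print(correlation_function([0.0, 0.01]))  # ~0: nearby particles repel
```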
At first glance, a determinantal process is something extremely special, because you require correlation functions of a very particular form; but it turns out that such processes appear naturally in many areas of mathematics and mathematical physics. In particular, they play a crucial role in random matrix theory, and an important role in representation theory, in statistical mechanics, and also in quantum mechanics. One of the best-known representatives of determinantal processes is the sine process, whose kernel has the following simple form: K(x, y) = sin(x − y) / (π(x − y)). Yanqi has in fact already defined the sine kernel, with a slightly different normalization; it is basically the same object.

Unfortunately I have no time to give you detailed examples of where the sine process arises, but let me say a couple of words about this (see also the numerical sketch below). Consider a large random Hermitian matrix with independent Gaussian entries. Since the matrix is Hermitian, its eigenvalues are real, and if you study what happens as the size of the matrix goes to infinity, it turns out that the eigenvalues, under a proper normalization, are governed exactly by the sine process. So the particles dropped on the real line by the sine process can be viewed as the eigenvalues of such an infinite random matrix under proper normalization. Moreover, this holds not only for the random matrix I just described, but for a large class of random matrices: you always get the sine process as a universal limit. This is interesting in particular because the theory was developed by physicists: the eigenvalues of such random matrices describe well the behavior of energy levels in some quantum systems.

OK, so I hope you will believe me that determinantal processes, and in particular the sine process, are important. Now let me turn to the starting point of our work, the central limit theorem. Consider the sine process; here we have the real line, and let us take the interval [0, n] (not a time interval, just an interval). Let us count the number of particles hitting the interval [0, n], and let us ask what happens as n goes to infinity. What is the behavior of this random variable? Of course it increases, but the question is how. The answer to this question goes back to the famous paper of Costin and Lebowitz from 1995. They showed that this behavior is governed by the central limit theorem, just in the usual sense. More precisely, they considered the random variable ψ_n equal to the number of particles hitting the interval [0, n], minus its expectation, divided by the square root of the variance of this number of particles, just as usual in the central limit theorem. It turns out that this random variable converges weakly in distribution, as n goes to infinity, to a standard Gaussian random variable ψ. However, the situation here turns out to be, in fact, very unusual, for the following reason: if you calculate the variance of the number of particles hitting the interval [0, n], you will find that it behaves like a logarithm of n as n goes to infinity.
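Here is a minimal numerical sketch of the random-matrix statement above, under one normalization assumption not spelled out in the talk (GUE with E|H_ij|² = 1 off the diagonal): in the bulk, rescaled GUE eigenvalues should locally look like the sine process with kernel sin(x − y)/(π(x − y)), whose intensity is 1/π.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# A GUE sample: Hermitian matrix with independent Gaussian entries.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2               # E|H_ij|^2 = 1 off the diagonal
eigs = np.linalg.eigvalsh(H)

# The semicircle density at 0 is sqrt(N)/pi eigenvalues per unit length, so
# after the rescaling x = sqrt(N) * eig the bulk configuration should look
# like the sine process with kernel sin(x-y)/(pi(x-y)), mean spacing pi.
bulk = np.sort(np.sqrt(N) * eigs[np.abs(eigs) < 0.1 * np.sqrt(N)])
spacings = np.diff(bulk)
print("mean bulk spacing:", spacings.mean(), "vs pi =", np.pi)
```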
So the variance grows really slowly, because usually in the central limit theorem the variance grows linearly. If you calculate the expectation of this number of particles, you get the usual linear growth, as in the classical central limit theorem: with our normalization of the kernel, the intensity is 1/π, so the expectation just equals n/π.

It is well known that for classical systems where the central limit theorem holds we also have the functional central limit theorem, or, what is the same, the Donsker invariance principle. Roughly speaking, it says the following: if a random variable is governed at infinity by the central limit theorem, then its trajectories are governed by the Brownian motion. However, for determinantal processes nothing is known about the behavior of the trajectories, even for the sine process. So the question we pose is: can we prove something like a functional central limit theorem, do we have the Brownian motion in the limit, and if not, then what do we have?

A standard way to set up the functional central limit theorem is the following. We consider the random process obtained from the random variable in the central limit theorem by inserting time: ψ_t^n = (#{particles in [0, tn]} − E #{particles in [0, tn]}) / √((1/π²) log n), where t is now time. So I put the time into the numerator and I do not touch the denominator, except that I replace the variance there by (1/π²) log n, because it is asymptotically the same. And I consider the time interval t ∈ [0, 1]: look, as time runs from 0 to 1, the interval [0, tn] covers exactly the whole interval [0, n], so indeed I study this random variable in the space of trajectories. The question we pose is: what is the behavior of the distribution of the process ψ_·^n as n goes to infinity? Here I put a dot: I mean that I consider the trajectories, I work in the space of trajectories, not at a fixed time. In the classical setting you would have convergence to the Brownian motion here, but it turns out that in our situation nothing like the Brownian motion appears: we find something which resembles more the white noise. (Answering a question: no, I do not put t into the denominator, because then it would not make sense for our purpose; if I replaced n by tn in the denominator, I would just recover the pointwise central limit theorem at each time, while with the fixed normalization, as t runs from 0 to 1, I study exactly the trajectories on [0, tn].)

Let me state our main result, which describes this behavior. It is the following theorem: there exist a real random variable η_n and a continuous random process Z_t^n such that, first, for all times t the integral of our process satisfies ∫₀ᵗ ψ_s^n ds = t·η_n + Z_t^n / √(log n). Note that so far I study not the process itself but its time integral.
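For concreteness, here is a small sketch of how one could build the trajectory t ↦ ψ_t^n just defined from a sampled particle configuration. Sampling the sine process itself is out of scope here, so a Poisson configuration with the same intensity 1/π stands in for it (note the stand-in has linear, not logarithmic, variance, so it only exercises the plumbing); `psi_trajectory` is an illustrative name.

```python
import numpy as np

def psi_trajectory(points, n, ts):
    """The rescaled counting process
    psi_t^n = (#{x in [0, t*n]} - t*n/pi) / sqrt(log(n) / pi^2),
    for a configuration `points` of particle positions x >= 0 whose
    intensity is assumed to be 1/pi (so E #[0, tn] = t*n/pi)."""
    pts = np.sort(np.asarray(points, dtype=float))
    counts = np.searchsorted(pts, ts * n)          # #{x <= t*n}
    expectation = ts * n / np.pi
    return (counts - expectation) / np.sqrt(np.log(n) / np.pi**2)

rng = np.random.default_rng(1)
n = 10**4
points = np.cumsum(rng.exponential(np.pi, size=2 * n))   # Poisson, intensity 1/pi
ts = np.linspace(0.0, 1.0, 200)
traj = psi_trajectory(points, n, ts)
```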
Why the time integral? Because it turns out that the process itself is very irregular: it resembles more the white noise than the Brownian motion, and the standard way to treat white noise is to take its time integral (integrating white noise, you get the Brownian motion). So it is reasonable here to take the time integral, and we state that it grows just linearly in time, plus a remainder.

Second, about the random variable η_n and the process Z_t^n we can say the following: their joint distribution converges weakly, as n goes to infinity, to the joint distribution of a random variable η and a process Z_t, in the space R × C([0, 1]) (the real numbers are responsible for η, the continuous functions for the process Z). Here η is a centered Gaussian random variable with variance 1/2, and Z_t is a centered Gaussian process with the following covariance (since the process is centered, the covariance is just the expectation of the product):

E[Z_t Z_s] = (1/4) ( θ(t − s) + (s − 1) θ(t) + (t − 1) θ(s) − t θ(s − 1) − s θ(t − 1) ), where θ(t) = t² log|t| and θ(0) = 0.

The covariance is rather awful, but the precise formula is not too important; what is important is that we can find it explicitly (a numerical sanity check of it appears below). Moreover, the random variable η and the process Z_t are independent. And the third part of our theorem says something about the rate of convergence of η_n to η: more precisely, the distance between the distribution of η_n and the distribution of η is at most of order 1/√(log n).

So this is our result, and now let me try to explain what it means; to this end I will draw a picture. Here we have time, and here we have our process, I mean the integral ∫₀ᵗ ψ_s^n ds. We have the straight line t·η, where η is the Gaussian random variable with the parameters above, and we know that our process follows another straight line, t·η_n, which converges in distribution to the line t·η with rate 1/√(log n); so in some sense the distance between the two lines is of order 1/√(log n). Then we also have the remainder Z_t^n/√(log n), which is of the same order as this deviation; but about the remainder we know, up to higher-order terms, everything: it is just a centered Gaussian process with the given covariance. So we have some Gaussian oscillations around the straight line, and this is basically the behavior of our process, which is something quite different from the Brownian motion.

Now, if there are no questions at this point, I would like first to explain how one arrives at such a statement, how one understands that you need to consider the time integral of the process and not the process itself, and so on; and then I will pass to the proof of the theorem. First, an explanation of the logarithmic growth of the variance. Basically it means the following: the particles of the sine process are distributed on the real line in a very regular way, because there is some repulsion between particles under the sine process, and that is why the variance can grow only very slowly. But the rigorous explanation is a calculation.
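Since the covariance above is explicit, one can at least sanity-check it numerically: the matrix (E[Z_{t_i} Z_{t_j}])_{i,j} must be positive semidefinite, and the variance must vanish at t = 0 and t = 1 (the latter is consistent with the choice η_n = ∫₀¹ ψ_s^n ds made later in the proof, which forces Z_1^n = 0). A small sketch, with the formula copied from the theorem as stated above:

```python
import numpy as np

def theta(t):
    """theta(t) = t^2 * log|t|, extended by theta(0) = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = t != 0
    out[nz] = t[nz]**2 * np.log(np.abs(t[nz]))
    return out

def cov_Z(t, s):
    """Covariance E[Z_t Z_s] of the limiting process, as in the theorem."""
    return 0.25 * (theta(t - s) + (s - 1) * theta(t) + (t - 1) * theta(s)
                   - t * theta(s - 1) - s * theta(t - 1))

ts = np.linspace(0.0, 1.0, 101)
C = cov_Z(ts[:, None], ts[None, :])
print(C[0, 0], C[-1, -1])             # variances at t = 0 and t = 1: both 0
print(np.linalg.eigvalsh(C).min())    # >= -1e-12 up to round-off: PSD
```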
Basically, you just calculate the variance from the definition: you represent the integrals as L² inner products, you use the Parseval identity, and you use that the Fourier transform of the sine kernel, considered as a function of the single variable x − y, is just the indicator function of an interval ([−1, 1] with the standard normalization of the Fourier transform). And basically that's it; maybe I will explain it after the talk, because it is really a rather tedious calculation. (A numerical check of the logarithmic growth appears below.)

A question about other examples of determinantal processes: it isn't a science about only one process. In a similar way, if you consider limits of random matrices as the size goes to infinity but choose a different normalization, you also get other processes, the so-called Airy process and Bessel process; let me not enter into details, because the kernels have much more complicated forms and unfortunately I have no time to explain them, but there is a whole class of interesting processes. What do I really need from the kernel? For the central limit theorem I need only the logarithmic growth of the variance, which the Airy and Bessel processes also have; but for the part of my results that I will mention later, I need exactly that the kernel has this sine form. I think this is probably only a technical restriction: for the moment I can do this only for the sine process, but probably for the Bessel and Airy processes it is also possible. For the moment the result holds only for the sine process.

OK, so now let me try to explain how one can arrive at such a statement of the theorem. To this end, let us forget for the moment about the theorem; let us just remember that we have the random process ψ_t^n and that we want to study the behavior of its trajectories as n goes to infinity. The first thing we should do is to understand what the limiting process can be, and to this end we proceed in a standard way: we find the behavior of the finite-dimensional distributions of our process. We have the following lemma: for any natural number d and any times 0 ≤ t₁ < ... < t_d ≤ 1, the finite-dimensional distributions (ψ_{t₁}^n, ..., ψ_{t_d}^n) of our process converge weakly, as n goes to infinity, to a random vector (ξ_{t₁}, ..., ξ_{t_d}), which is centered Gaussian with the covariance E[ξ_{t_i} ξ_{t_j}] = 1 if t_i = t_j and 1/2 if t_i ≠ t_j.

You see, the limiting covariance at distinct times is independent of the choice of the times; in particular, if you take one time very close to another, nothing changes. It means that in the limit we cannot, in principle, have a continuous process; so nothing like the Brownian motion is possible. I will not prove this lemma; let me, however, say just a couple of words about it. The convergence itself is a rather straightforward generalization of the Costin–Lebowitz central limit theorem to the multidimensional setting; and to see that the covariance has this form is a rather simple computation: you just calculate the covariance before the limit and then take the limit, and here you crucially use that the variance has the logarithmic growth. Note that in this lemma I do not use that I consider the sine process: I use only the logarithmic growth of the variance. What is remarkable is that in the Costin–Lebowitz theorem you need only that the variance grows to infinity, while in our setting we indeed need the logarithm: the behavior of the trajectories really depends on the logarithmic growth of the variance.
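The logarithmic growth used above can be checked numerically from the exact variance formula for a determinantal process, Var N_I = ∫_I K(x,x) dx − ∫_I ∫_I K(x,y)² dx dy, which for the sine kernel reduces to a one-dimensional integral (the reduction to one variable is my rewriting, not from the talk):

```python
import numpy as np

def var_count(n, grid=2_000_000):
    """Var N_[0,n] for the sine process:
    Var = int_I K(x,x) dx - int_I int_I K(x,y)^2 dx dy
        = n/pi - (2/pi^2) * int_0^n (n - u) * sin(u)^2 / u^2 du."""
    u = np.linspace(1e-9, n, grid)
    integrand = (n - u) * np.sin(u)**2 / u**2
    du = u[1] - u[0]
    integral = du * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return n / np.pi - (2 / np.pi**2) * integral

for n in [10, 100, 1000]:
    # The ratio approaches 1/pi^2 ~ 0.1013, slowly: an O(1) constant remains.
    print(n, var_count(n) / np.log(n))
```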
Well, now let us try to understand better what this limiting distribution is. To this end I note that it can be represented as the distribution of the sum of two random vectors: the first has the same random variable η in each component, (η, ..., η), and the second is (γ_{t₁}, ..., γ_{t_d}), where η, γ_{t₁}, ..., γ_{t_d} all have the normal distribution with parameters (0, 1/2) and are mutually independent (see the small check below). This is just a simple computation: if you take the sum of two independent Gaussian random variables, you should just add their variances, and then you get that the distributions coincide: 1 = 1/2 + 1/2 on the diagonal, and 1/2 off the diagonal, coming from η alone. Well, it means that if the limiting process exists, then we expect it to be the process η, which is constant in time, plus some process γ_t, where γ_t has such finite-dimensional distributions: a centered Gaussian process with the covariance E[γ_t γ_s] = 1/2 if t = s and 0 if t ≠ s. This is a formal argument which gives us the intuition; that is why I write only an expected convergence here.

So what does this tell us? It tells us that if the limit exists, then it is governed by a very regular process, the constant-in-time η, plus some process which is completely decorrelated in time, some process which resembles the white noise. But it isn't the white noise, because the white noise somehow has infinite variance, and here the variance is finite. Still, just like the white noise, this process is not defined in the classical sense; that is why, in order to understand it rigorously, you should understand it in some weak sense: for example, you should integrate. Integrating the expected convergence from 0 to t, you expect that ∫₀ᵗ ψ_s^n ds converges to η·t + ∫₀ᵗ γ_s ds. But trying to establish this convergence rigorously, you fail, in some sense, because of the following proposition: for any integrable function φ (a function of time), the distribution of ∫₀¹ φ(s) ψ_s^n ds converges weakly to the distribution of η multiplied by ∫₀¹ φ(s) ds.
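The variance bookkeeping in this decomposition is easy to confirm numerically; a tiny Monte Carlo sketch, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 200_000
eta = rng.normal(0, np.sqrt(0.5), size=(m, 1))       # eta ~ N(0, 1/2)
gamma = rng.normal(0, np.sqrt(0.5), size=(m, d))     # independent N(0, 1/2)
xi = eta + gamma                                     # xi_{t_i} = eta + gamma_{t_i}

# Empirical covariance: ~1 on the diagonal, ~0.5 off it, as in the lemma.
print(np.round(np.cov(xi.T), 2))
```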
What does it mean? Look: if I integrate the expected relation against the function φ, then indeed I should have η times the integral of φ, but I should also have the integral of φ against the process γ_t, and I do not see it in the limit. In fact this is clear: the process γ just oscillates too fast, because it is completely decorrelated in time, and the oscillations aren't very large, because its variance is finite; so in the limit we do not see it. Note that the proposition does not require any regularity of the function φ, so these oscillations are indeed rather powerful. So this proposition gives some information about the limiting behavior of the process ψ^n, but we lose a lot of information: we do not see the process γ at all. And our next question is: how can we observe this process in the limit? To this end, let us nevertheless consider the integral of our process. Due to the proposition, the contribution of γ to this integral just vanishes, but let us look at the smaller-order terms in this convergence. This is exactly what we are doing in our theorem, and in fact we find the influence of the process γ_t in these smaller-order terms, which are governed by the centered Gaussian process Z_t. That is why the theorem is stated the way it is: we do not want to lose this information, and that is why we look at the lower-order terms.

Well, now I hope I have more or less explained the setting, I mean why the theorem has such a strange form; now let me say something about its proof. To this end I need to give some preliminary results; more precisely, I will need to generalize the central limit theorem of Costin and Lebowitz.

A general central limit theorem. Let us note that the number of particles hitting the interval [0, n], which appears in the central limit theorem, can be represented in the following form: you just take the sum over all particles x of the indicator function, Σ_x 1_{[0,1]}(x/n). Indeed, this is the same as summing the indicator of [0, n] over the particles x, which is clearly the number of particles hitting [0, n]. Now let us consider any real measurable function f with compact support, and let us replace this indicator by the function f. We obtain the random variable S_f^n = Σ_x f(x/n); such a sum is usually called the linear statistic corresponding to the function, or observable, f (a code sketch follows below). It turns out that the central limit theorem holds not only for the special choice f = 1_{[0,1]} but in general. Theorem 2, first part: if Var S_f^n goes to infinity as n goes to infinity, then the distribution of (S_f^n − E S_f^n) / √(Var S_f^n) converges to the standard Gaussian. You see, this is indeed a generalization of the Costin–Lebowitz theorem, which is the special case of the observable f = 1_{[0,1]}, because there, as we said, the variance grows logarithmically. This theorem was established by Soshnikov under some additional conditions, and it was then generalized to the present form by Sasha and myself. The second part of the theorem considers a different situation: let us assume that the function f is sufficiently regular, namely that it belongs to the Sobolev class H^{1/2 + ε} for some ε > 0.
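In code, the linear statistic is one line; a sketch with the two kinds of observables that play a role here (the irregular indicator and a continuous hat function, which lies in H^{1/2+ε}; the configuration is again assumed given):

```python
import numpy as np

def linear_statistic(points, f, n):
    """S_f^n = sum over particles x of f(x / n). The particle count N[0, n]
    is the special case f = indicator of [0, 1]."""
    return np.sum(f(np.asarray(points, dtype=float) / n))

indicator = lambda u: ((0.0 <= u) & (u <= 1.0)).astype(float)  # irregular: jumps
hat = lambda u: np.maximum(1.0 - np.abs(2.0 * u - 1.0), 0.0)   # in H^(1/2+eps)
```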
Then, first, it turns out that the variance of the corresponding linear statistic does not grow at all: Var S_f^n ≤ C, with the constant C independent of n, so we are not in the setting of the first part. And second, the distribution of S_f^n − E S_f^n converges to a normal law with mean 0 and a variance which can be calculated explicitly knowing the function f: it is something like the H^{1/2} norm of f, slightly modified. So you see that here the central limit theorem holds without any normalization at all, just because the variance does not grow.

OK, this is what I will need; now I finally turn to the proof of my theorem. It turns out that the theorem can be proven just by applying these general versions of the central limit theorem twice, to observables chosen in an appropriate way. Let us put η_n := ∫₀¹ ψ_s^n ds and Z_t^n := √(log n) · (∫₀ᵗ ψ_s^n ds − t·η_n). This is just a choice, and with this choice we get item 1 of the theorem automatically; but item 1 by itself does not say much. What about the convergence in item 2? We need to show that η_n jointly with Z^n converges to (η, Z). I will not show the joint convergence, but let me show the convergence of the two components separately: first, (a) the random variable η_n converges weakly to the random variable η from the theorem, the Gaussian random variable with parameters (0, 1/2); and then, (b) the process Z^n converges weakly to the process Z.

The first part, if you remember, follows just from the proposition above, applied with φ equal to the indicator of the interval [0, 1]; but since I did not prove the proposition, let me instead prove this convergence honestly (the proof of the proposition and of this convergence is just the same), and the proof turns out to be simple. To prove (a), note that η_n can be represented, up to a deterministic normalization, as (S_f^n − E S_f^n) / √(Var S_f^n), where the function f is f(u) = ∫₀¹ 1_{[0,s]}(u) ds, so that f(u) = 1 − u on [0, 1] and zero outside. To get this, you just take the definition of η_n, insert the definition of the random process ψ, use the representation of the particle number as a linear statistic, and you obtain this formula. Then you calculate and find that Var S_f^n behaves like (1/(2π²)) log n as n goes to infinity. So we are in the setting of the first part of Theorem 2, where the variance grows, and that theorem implies the desired convergence. That's it for (a). To get (b) we argue in the same way: Z_t^n = S_{j_t}^n − E S_{j_t}^n, where the function j_t, which I will not write precisely, is of the same spirit, but now regular. Why is there no normalization here? Just because it cancels against the factor √(log n) by which we multiply. Then we apply the second part of Theorem 2, and we get, not yet the convergence (b), because that convergence holds in the space of trajectories, but the convergence of the finite-dimensional distributions: (Z_{t₁}^n, ..., Z_{t_d}^n) converges in distribution to the random vector (Z_{t₁}, ..., Z_{t_d}) for any times t₁, ..., t_d. It then remains to pass from the convergence of finite-dimensional distributions to the convergence of processes.
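The two choices above are pure bookkeeping, which one can mirror in code on top of the trajectory sketch from before (again just illustrative plumbing, with a trapezoidal rule for the time integral):

```python
import numpy as np

def eta_and_Z(ts, traj, n):
    """Given a sampled trajectory traj[i] = psi_{ts[i]}^n on a uniform grid
    ts over [0, 1], form the objects chosen in the proof:
      eta_n = integral_0^1 psi_s^n ds,
      Z_t^n = sqrt(log n) * (integral_0^t psi_s^n ds - t * eta_n)."""
    dt = ts[1] - ts[0]
    increments = 0.5 * (traj[1:] + traj[:-1]) * dt      # trapezoidal rule
    integral = np.concatenate([[0.0], np.cumsum(increments)])
    eta_n = integral[-1]
    Z = np.sqrt(np.log(n)) * (integral - ts * eta_n)    # note Z_0^n = Z_1^n = 0
    return eta_n, Z
```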
Note what has happened: we have reduced the problem of studying the functional central limit theorem for an irregular statistic, the one given by the indicator of the interval [0, n], to studying the functional central limit theorem for statistics corresponding to regular observables, where the variance does not grow. It turns out that this investigation is much simpler: the behavior of the trajectories in the case of statistics of regular observables is very nice, and in particular they are continuous. Then, arguing as in the proof of the classical functional central limit theorem, for example for sums of independent random variables, you get the convergence (b). So that is, essentially, the proof of the second part of the theorem; and the third part, the rate of convergence, you get automatically from the proof of the central limit theorem. So, thank you for your attention, and sorry that I am slightly over time.