Okay, so we start with a reminder of what we did yesterday. Before developing the general theory, I wanted to consider three examples, and yesterday we treated two of them. The first was the sine process of Dyson. This, as I briefly recall, is the point process describing the spacings between eigenvalues of a random Hermitian matrix: it is what Dyson's observer, placed in the bulk of the Wigner semicircle, sees around himself; the configuration of eigenvalues around him is the point process with the sine kernel, the sine process. The second example, to which we will return as this course progresses, is that of Gaussian power series. This is the point process with the Bergman kernel K(z, w) = 1/(π (1 − z w̄)²), and it governs the zeros of the random Gaussian analytic function f(z) = Σ aₙ zⁿ, where the aₙ are independent standard complex Gaussians. By the beautiful theorem of Peres and Virág that I formulated yesterday, the zero set of this function is a determinantal point process with the Bergman kernel, that is, with the kernel of the orthogonal projection from L² of the unit disk with respect to Lebesgue measure onto the Bergman space, the space of square-integrable functions holomorphic on the unit disk. And the theorem obtained in joint work with Qiu and Shamov, which I formulated yesterday in this particular case but which in fact holds in full generality for determinantal point processes, says the following: if f belongs to the Bergman space and the restriction of f to the configuration X is equal to zero, then f is identically zero. We shall see that in full generality the theorem says that the reproducing kernels along a configuration of a determinantal point process generate the ambient Hilbert space, in this case the Bergman space; the precise formulation will be given as the course progresses. Let me reformulate this in other words: the span of the functions K_x, for x ranging over the configuration X, almost surely coincides with the full Bergman space. Such a statement was conjectured by Lyons and Peres; Lyons proved it in the discrete case, and Ghosh proved it for the sine process under an additional assumption of rigidity, to which I will return in more detail later. Now let me consider the third example before passing to the general theory: the discrete Bessel kernel, which we heard about in the talk of Tomohiro Sasamoto, and the discrete sine kernel. I would like to consider them in the way in which they in fact appeared, in the work of Johansson and of Borodin, Okounkov and Olshanski. So let us consider the Young graph, the graph of Young diagrams: the diagrams with n cells form level n of the graph, so it is a graded graph, and an arrow goes from a diagram to a diagram on the next level if the latter is obtained from the former by adding one cell. The first short section of the Young graph looks like what is called the Pascal graph,
so at each step one can add a cell either on the right or at the bottom, as here; but already at level 4 this is no longer the case, and one sees that the Young graph is much more complex than the Pascal graph, the Pascal triangle. This complexity of course only increases as the size of the Young diagrams grows, and we can recall the Hardy–Ramanujan formula: the number of Young diagrams with n cells is roughly exp(π √(2n/3)). Excuse me, let me say this correctly: the Hardy–Ramanujan asymptotics of course holds on the multiplicative scale; it is the formula I will write later that holds only on the logarithmic scale. So one endows the space of Young diagrams with the Plancherel measure. To keep the exposition elementary, let me denote by dim λ the number of paths in the Young graph from the root, the empty diagram, to λ. Of course we know that this is also the dimension of the irreducible representation of the symmetric group corresponding to λ, which explains the notation dim λ and explains the formula, which however can also be obtained in a purely elementary combinatorial way: Σ_{λ ∈ Y_n} (dim λ)² = n!, where Y_n is the n-th level of the Young graph. The Plancherel measure is then the measure which assigns to a diagram λ on the n-th level the weight Pl_n(λ) = (dim λ)² / n!. This is the Plancherel measure we shall consider, and the study of its properties is a fascinating subject of asymptotic representation theory. One of the first results here is the theorem on the limit shape, due to Logan and Shepp and to Vershik and Kerov, which says that for Plancherel-random Young diagrams there is an analogue of the law of large numbers, or, one can also say, an analogue of the Wigner semicircle law. And I should say that, while there has been very interesting work of Okounkov, the reason why Young diagrams behave like random matrices, well, there are many experts in the audience who can correct me, but to me it remains obscure. We shall see in this example that the two ensembles exhibit a very similar behaviour, but why this happens, again, I cannot explain. The limit shape theorem of Logan–Shepp and Vershik–Kerov is the following. It will be convenient for me to draw the Young diagram the Russian way: this is the British way, there also exists the French way, which is obviously different, but the most convenient is to rotate the diagram by 45 degrees, as this stresses the symmetry between columns and rows, and this is what we shall do. Then the boundary of the Young diagram is the graph of a piecewise linear function, and this graph, as n goes to infinity and suitably rescaled (please allow me to skip the precise formulation here; I will give the precise formulation of the next result, and it will also cover this case), rescaled by √n in both directions, approaches the graph of a fixed function Ω(t). By the way, this function lives on the interval from −2 to 2, just like the Wigner semicircle; let me not write the explicit formula, but Ω is just an antiderivative of the arcsine. Please observe that an antiderivative of the arcsine, if you think about the graph of the arcsine, is of course a continuous function, as our functions are by definition continuous; nonetheless it has a discontinuity of the derivative at the endpoints ±2, and this is precisely the edge of the diagram. This is the transition from the bulk to the edge, just as in the case of random matrices. Let me say very briefly what motivated Logan and Shepp and Vershik and Kerov to consider this problem. It is a little digression which I allow myself; it will not be used in what follows, but I think it is an interesting anecdote. They were solving what is called the Ulam problem: what is the length of the longest increasing subsequence of a random permutation? One takes a random permutation and is interested in its increasing subsequences. It is typical to illustrate this with boarding a plane: the person who stops you is the one ahead of you, and before that person stows the luggage you cannot move ahead; if everybody arrived in order, the plane would board immediately, but in practice we know it does not. In short, the longest increasing subsequence is, so to speak, the longest sequence of passengers who arrive in order. This question is the Ulam problem. It was discovered by Hammersley that the longest increasing subsequence has length of order √n, and it was conjectured that it is in fact a constant times √n; eventually it was proved that this constant is 2. And I would like to point out, as I said, the theorem about the limit shape... yes, excuse me, before I go into that, one more clarification:
it turned out that the problem about random permutations should be studied in terms of a problem about Young diagrams, through what is called the Robinson–Schensted–Knuth correspondence. There is a bijection under which the uniform measure on permutations is taken to the Plancherel measure on the space of Young diagrams. This correspondence can be very vaguely explained in the following terms. Imagine that a group, one can say of military people, but let us say mathematicians, comes back from lunch to this room. You know the mathematical community is very hierarchical: there is the most important mathematician, number one, then number two, and so on. They arrive and take seats, and obviously the most important person has to sit in the corner, and a less important one cannot sit ahead of a more important one; but they arrive from lunch in random order. So maybe some not very important mathematician sits down and takes the most important place; then, when a more important one comes, he has to give up his place, and so he moves, and so on. When the whole process finishes, the way the people are seated forms a Young diagram. But in fact there are two paths: there is the order in which people arrive, and there is also a Sherlock Holmes sitting in the room and watching, who does not see who arrives when, but sees the order in which the seats are occupied. These two paths together determine the permutation uniquely, and this is the Robinson–Schensted–Knuth correspondence. What I said about the Plancherel measure is maybe not a complete proof, but it is a very precise statement: the uniform measure on permutations is taken by the Robinson–Schensted–Knuth correspondence to the Plancherel measure on the space of Young diagrams. And it turns out, as the solution of the Ulam conjecture showed, that the Plancherel measure is the right object: it is much more convenient to work with the Plancherel measure than with the uniform measure on permutations, because of the much richer structure that the Plancherel measure has. So let me pursue my digression. As I said, there were two teams, the team of Logan and Shepp in the United States and the team of Vershik and Kerov in the USSR, and they were both working on this problem; but Logan and Shepp only proved the lower estimate, while Vershik and Kerov proved both sides: they proved that the length is asymptotically 2√n. And the difference, and this is the story that I would like to stress, the difference is an observation whose formulation occupies one line and whose proof also occupies one line, after they had worked many years on this conjecture. The one-line observation is the following: the Plancherel measures form a Markov chain. I will not write the transition probabilities explicitly, because we will not have time to investigate this Markov chain, but it is possible to introduce transition probabilities on the Young graph in such a way that the Plancherel measure of level n is taken to the Plancherel measure of level n + 1. As I say, this is a remark which takes one line to formulate and one line to write the transition probabilities, and the verification is immediate; but this is precisely what made the difference between the lower estimate and the upper estimate in the Ulam problem. So let me formulate it. The theorem of Vershik and Kerov is that the longest increasing subsequence has length 2√n: the length of the longest increasing subsequence divided by √n goes to 2 as n goes to infinity, where convergence is in probability, in measure. It is a statement similar in spirit to the law of large numbers. This statement they formulated and proved, and this 2 is in fact the same 2 as the endpoint of the limit shape: under the correspondence the longest increasing subsequence is the first row of the diagram, and the first row has length 2√n. Indeed, this theorem is essentially a corollary, with some additions, of the theorem on the limit shape of the Young diagram. So let me explain the difference: what precisely is the difference between the limit shape theorem and the theorem about the longest increasing subsequence? The limit shape theorem does not exclude the possibility that the first row goes on way beyond 2√n: a single long row changes the rescaled graph only by something of order 1/√n, so it would not contradict the limit shape theorem. In fact the limit shape theorem implies the lower bound, and this is what Logan and Shepp proved; and precisely for the upper bound, the upper bound is immediate from the Markov property. Okay, so after this, Vershik and Kerov, moved most of all by analogy, decided to investigate further properties of the Plancherel measure. In particular, they studied the maximum of the Plancherel measure, in the natural scaling by √n: we
take minus the logarithm of the Plancherel measure of a diagram and divide by √n, and Vershik and Kerov proved that for the fattest diagram this quantity is bounded: minus the logarithm of the maximal weight max_λ Pl_n(λ) is of order √n. This is a theorem of Vershik and Kerov. So please observe: by the Hardy–Ramanujan formula, the number of Young diagrams with n cells grows as an exponential of √n, but the Plancherel measure is of course non-uniform on the space of Young diagrams. So the question is: how much does the fattest Young diagram eat? It is clear that there are some very lean Young diagrams: the diagram which consists of a single column or of a single row has measure 1/n!, so its measure is super-exponentially small. But what about the fattest diagram? Vershik told me that there were conjectures that the fattest diagram is in fact fat, that it eats a lot of the weight, maybe one over some polynomial of n or something like this. And with Kerov they proved that this is not the case: the measure of every diagram decays at least in a stretched-exponential way, max_λ Pl_n(λ) ≤ e^(−c√n). An obvious analogy in this situation is a biased coin, so let us think about a biased coin, and let me actually write this. For a biased coin, the probability of a binary word is just p to the number of ones times (1 − p) to the number of zeros. So how is the measure of a biased coin distributed? It is distributed in the following way, and this is the law of large numbers, but this reformulation is due to Shannon: it is the Shannon asymptotic equipartition of information.
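The combinatorial facts used here, the path-counting definition of dim λ, the identity Σ_{λ∈Y_n} (dim λ)² = n!, and the non-uniformity of the Plancherel weights, can be checked directly for small n. This is a small sketch of mine, not part of the lecture; all function names are my own.

```python
import math
from functools import lru_cache

def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

@lru_cache(maxsize=None)
def dim(shape):
    """Number of paths from the empty diagram to `shape` in the Young graph
    (= number of standard Young tableaux of this shape)."""
    if sum(shape) == 0:
        return 1
    total = 0
    for i in range(len(shape)):
        # remove a corner cell from row i if the result is still a partition
        if shape[i] >= 1 and (i == len(shape) - 1 or shape[i] > shape[i + 1]):
            smaller = shape[:i] + (shape[i] - 1,) + shape[i + 1:]
            smaller = tuple(p for p in smaller if p > 0)
            total += dim(smaller)
    return total

n = 8
levels = list(partitions(n))
# the elementary combinatorial identity: sum of dim^2 over level n equals n!
assert sum(dim(l) ** 2 for l in levels) == math.factorial(n)

# Plancherel weights dim^2 / n!; the "fattest" diagram is far from uniform
weights = {l: dim(l) ** 2 / math.factorial(n) for l in levels}
fattest = max(weights, key=weights.get)
print(fattest, weights[fattest])
```

The recursion for dim is exactly the path count down the Young graph; memoization keeps it fast for these small levels.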
How is the measure of a biased coin distributed? The total number of binary words of length n is 2ⁿ, but in fact the measure does not live on all of them. The measure lives on a smaller subset, of cardinality roughly e^(hn), where h is the entropy. So it lives on an exponentially smaller subset, and in restriction to this exponentially smaller subset it is asymptotically equidistributed. Again, this asymptotic equidistribution should be understood in the logarithmic sense: the measure lives on a set of cardinality between e^((h−ε)n) and e^((h+ε)n), and the measure of each such binary word is e^(−hn), again in the logarithmic sense, meaning that minus the logarithm of the measure divided by n converges to h. This is just the Shannon equipartition theorem, and it is a direct one-line corollary of the Bernoulli law of large numbers; but it took over 200 years to formulate this one-line corollary, Shannon's work being from 1948. Now, Vershik and Kerov, motivated by this example, conjectured that the Plancherel measure obeys a similar result. Here is the Vershik–Kerov conjecture; it was made in 1985 and the proof appeared in 2012. There exists a constant h, say Vershik and Kerov, such that for any ε > 0, the Plancherel measure of the set of diagrams λ such that |(−log Pl_n(λ))/√n − h| < ε (I write the minus just in order to work with positive quantities; this is the information, produced here at rate √n) goes to 1 as n goes to infinity. Okay?
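Shannon's equipartition statement for the biased coin is easy to see in simulation. Here is a minimal sketch of mine, not from the lecture, with the entropy measured in nats:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                          # probability of a 1
h = -p * np.log(p) - (1 - p) * np.log(1 - p)     # entropy in nats

n = 10_000                                       # word length
words = rng.random((200, n)) < p                 # 200 random binary words
ones = words.sum(axis=1)

# -log of the probability of each word, p^{#ones} (1-p)^{#zeros}
neg_log_prob = -(ones * np.log(p) + (n - ones) * np.log(1 - p))

# asymptotic equipartition: -log P(word) / n concentrates at the entropy h
print(h, neg_log_prob.mean() / n, neg_log_prob.std() / n)
```

The printed mean sits on top of h and the spread shrinks like 1/√n, which is exactly the logarithmic-scale equidistribution described above.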
Equivalently, for any ε > 0 there exists a set Y_n^ε ⊂ Y_n such that the cardinality of Y_n^ε lies between e^((h−ε)√n) and e^((h+ε)√n), and for any λ in Y_n^ε the Plancherel measure Pl_n(λ) is at least e^(−(h+ε)√n) and at most e^(−(h−ε)√n). Precisely like the biased coin. So this is the Vershik–Kerov conjecture, proved in 2012. And let me say that in the proof, and this is the main point, precisely the main role is played by the determinantal structure of the Plancherel measure. I will not expose the proof of the conjecture, I refer to the publication; I would, however, like to discuss the local structure of the Young diagram. Here, again, Baik, Deift and Johansson were interested in the longest increasing subsequence, in fact in the limit theorem for the longest increasing subsequence.
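Since the longest increasing subsequence has come up again, here is a quick numerical illustration, my own sketch rather than anything from the lecture, of the two facts used above: Schensted's theorem, that the first row of the RSK shape is exactly the length of the longest increasing subsequence, and the 2√n law, which finite-n averages approach slowly from below.

```python
import bisect
import math
import random

def rsk_shape(perm):
    """Shape of the Young diagram produced by RSK row insertion (bumping)."""
    rows = []
    for x in perm:
        for row in rows:
            j = bisect.bisect_left(row, x)
            if j == len(row):
                row.append(x)       # x lands at the end of this row
                x = None
                break
            row[j], x = x, row[j]   # bump the displaced entry to the next row
        if x is not None:
            rows.append([x])
    return [len(r) for r in rows]

def lis_length(perm):
    """Longest increasing subsequence length via patience sorting."""
    tails = []
    for x in perm:
        j = bisect.bisect_left(tails, x)
        if j == len(tails):
            tails.append(x)
        else:
            tails[j] = x
    return len(tails)

random.seed(1)
n = 400
ratios = []
for _ in range(20):
    perm = random.sample(range(n), n)
    shape = rsk_shape(perm)
    # Schensted: the first row of the RSK shape is exactly the LIS length
    assert shape[0] == lis_length(perm)
    ratios.append(shape[0] / math.sqrt(n))
print(sum(ratios) / len(ratios))   # tends to 2 as n grows, still below at n = 400
```

The gap below 2 at finite n is real: the corrections to the 2√n law are of order n^(1/6).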
And Borodin, Okounkov and Olshanski also discussed the behaviour of the Young diagram in the bulk, and for my purposes I only want to discuss the behaviour in the bulk. So the observer places himself at a position on the limit shape, not at the edge of it but in the bulk, and asks himself: what does he see? To pose this question precisely we need to assign a configuration to a Young diagram, but this is easily done. The configuration will be a configuration on ℤ; it is sometimes more convenient to consider not integers but half-integers, and Borodin, Okounkov and Olshanski often consider half-integers, because obviously a half-integer, as opposed to an integer, is either positive or negative, which is more convenient for Young diagrams; but integers or half-integers, however one prefers. As I said, the boundary of the Young diagram drawn the Russian way is the graph of a piecewise linear function, and the graph goes either up or down. When the graph goes down, we put a particle; when the graph goes up, we put a hole. So to a diagram we assign a binary sequence, a one for a particle and a zero for a hole, and in this way I have constructed a bijection between the set of all Young diagrams and a certain subset of binary sequences; I have embedded the set of diagrams into the binary sequences. Let us observe that this set of binary sequences is quite special. Namely, this set, the image of the embedding, is just an orbit of the infinite symmetric group, and by the way this will play a role as this course progresses. The infinite symmetric group, the group of all finite permutations, is applied to the sequence corresponding to the empty Young diagram: if there are no cells in my Young diagram, then the sequence has particles filling the negative half-line and holes filling the positive half-line. And the addition of a cell, one can see it quite clearly from the picture, corresponds to a transposition of a particle and a hole. So the image of my set of Young diagrams under this embedding is the orbit of this sequence under the infinite symmetric group. And in fact we will consider the orbits of the infinite symmetric group in detail: the fact that our objects are invariant, or quasi-invariant, under the action of the infinite symmetric group will play a role in what follows, in the proof of the analogue of the Gibbs property which I discussed last time. And I would like just to point out that now I have a point process on ℤ: a point process on ℤ is just a measure on the collection of subsets of ℤ, which is the same thing as a measure on the collection of binary sequences. Okay. So now the very important step forward: after these preliminaries I am ready to formulate the definition of the Bessel point process. The very important and amazing discovery of Johansson is that it is not the Plancherel measure but the Poissonized Plancherel measure that is determinantal. So let me write down the Poissonized Plancherel measure.
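The particle-hole encoding just described is easy to make concrete. In the standard half-integer convention, which I am assuming here, the particles of a diagram λ sit at the positions λ_i − i + 1/2; the helper names below are mine, and this is a sketch rather than anything written in the lecture.

```python
from fractions import Fraction

def particles(shape, window):
    """Particle positions (half-integers lambda_i - i + 1/2) of a Young
    diagram, intersected with a finite window of half-integers."""
    # pad the partition with zero rows deep enough to cover the window
    depth = max(len(shape), int(max(abs(x) for x in window)) + 2)
    rows = list(shape) + [0] * (depth - len(shape))
    pos = {Fraction(rows[i] - (i + 1)) + Fraction(1, 2) for i in range(depth)}
    return {x for x in window if x in pos}

# half-integer window -9/2, -7/2, ..., 9/2
window = [Fraction(2 * k + 1, 2) for k in range(-5, 5)]

# empty diagram: particles fill the negative half-line, holes the positive one
assert particles((), window) == {x for x in window if x < 0}

# adding one cell transposes a particle and the neighbouring hole:
# for the one-cell diagram the particle at -1/2 jumps to +1/2
expected = ({x for x in window if x < 0} - {Fraction(-1, 2)}) | {Fraction(1, 2)}
assert particles((1,), window) == expected
print("particle/hole encoding checks out")
```

One can iterate the last check: every edge of the Young graph swaps exactly one adjacent particle-hole pair, which is the orbit statement made above.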
Let me write it like this, Poissonized with some parameter η: Pois_η = e^(−η) Σ_{n=0}^∞ (η^n / n!) Pl_n. So the Poissonized Plancherel measure is in fact a determinantal point process. This is a theorem of Johansson; there are many proofs, for example Borodin, Okounkov and Olshanski have a different proof, and Okounkov has yet another one. The Poissonized Plancherel measure is a determinantal point process with precisely the discrete Bessel kernel; let me write it down. What do I mean by the discrete Bessel kernel? With η = θ², I write J_θ(x, y) = θ (J_x(2θ) J_{y+1}(2θ) − J_{x+1}(2θ) J_y(2θ)) / (x − y), where J is the standard Bessel function. This is the discrete Bessel kernel. Observe that the difference between the classical Bessel kernel and the discrete Bessel kernel is that here it is the index of the Bessel function that changes, not the argument, as opposed to the classical Bessel kernel which we saw at talks several times yesterday; here the parameters x and y are not the arguments but the indices of the Bessel function. So what do I mean when I say that this process is a determinantal point process with the discrete Bessel kernel? I can explain this quite explicitly. It means that the probability that given particles n_1, …, n_l all belong to the configuration is a determinant; observe that here, in the discrete setting, I do not need the complicated formalism of correlation functions. I just consider cylinder sets in my space of binary sequences, cylinder sets of the special form saying that some given particles belong to my configuration, and the probability of such a set is just the determinant det[J_θ(n_i, n_j)], with i, j from 1 to l and η = θ². Excuse me, a question? Yes, P here is the Poissonized Plancherel measure, thank you very much; more precisely, the image of the Poissonized Plancherel measure under this embedding. Okay, so after the formulation of this beautiful theorem of Johansson there is still the question of taking the limit. So please observe: again we can see in this example the importance of the determinantal property. The formula is complicated, the interaction between particles is complicated, it is given in this determinantal way, but observe that all the dependence is encoded in one function of two variables, namely the kernel which generates the determinantal point process. Observe also that Johansson's theorem of course contains information about all Plancherel measures: we took the Poissonized measure, but if we want the Plancherel measure of index n, one should just take the parameter η equal to n. It is standard that the Poisson distribution lives in a neighbourhood of its average value, the parameter of the Poisson distribution; it is concentrated there, and there is a central limit theorem, so it is concentrated in the neighbourhood of its average value.
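Numerically the discrete Bessel kernel is easy to play with. Below is a sketch of mine, using SciPy's Bessel functions, that checks the ratio formula against the series representation Σ_{s≥1} J_{x+s}(2θ) J_{y+s}(2θ), which exhibits the kernel as a projection, and reads off occupation probabilities from the diagonal; the convention η = θ² is the one assumed above.

```python
import numpy as np
from scipy.special import jv

def bessel_kernel(x, y, theta):
    """Discrete Bessel kernel J_theta(x, y) for x != y (eta = theta^2)."""
    z = 2 * theta
    return theta * (jv(x, z) * jv(y + 1, z) - jv(x + 1, z) * jv(y, z)) / (x - y)

theta = 5.0
z = 2 * theta

# sanity check against the series form sum_{s>=1} J_{x+s}(2 theta) J_{y+s}(2 theta);
# the series converges fast since Bessel functions decay rapidly once the
# order exceeds the argument, so a truncation at s = 200 is ample here
s = np.arange(1, 200)
for x, y in [(0, 3), (-2, 4), (1, 7)]:
    series = np.sum(jv(x + s, z) * jv(y + s, z))
    assert abs(bessel_kernel(x, y, theta) - series) < 1e-10

# diagonal entries, i.e. one-point occupation probabilities, via the series
diag = {x: np.sum(jv(x + s, z) ** 2) for x in range(-8, 9)}
assert all(0.0 <= v <= 1.0 for v in diag.values())
print(diag[0])
```

The diagonal values interpolate between 1 far to the left and 0 far to the right, which is the particle-hole picture of the empty diagram perturbed near the origin.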
And so in order to obtain information about the Plancherel measure of order n, one should take η equal to n, and then, if I want information in the bulk of the Young diagram, I need to study this Bessel kernel in the regime in which the parameter θ is √n and x and y are also of order √n. But this is a very classical asymptotics: the asymptotics of the Bessel function J_x(2θ) when 2θ/x tends to some constant a. This is classical; it is called the Debye asymptotics, and by the way Debye received the Nobel Prize in Chemistry. And what is the leading term in this asymptotics? It is the sine function. So after this preliminary discussion we are ready to formulate the theorem of Borodin, Okounkov and Olshanski, which says that the Plancherel probability of the event that the positions n_1 = a√n + k_1, …, n_l = a√n + k_l are all occupied tends, as n goes to infinity, to a determinant; obviously to a determinant, because a determinant must tend to a determinant, so the whole game is about convergence of kernels. Here, and this is important, a is strictly between −2 and 2. In fact, by the way, this is also clear in the Debye asymptotics: if one approximates a Bessel function whose index nearly coincides with its argument, then the asymptotics of the Bessel function is no longer governed by the sine function but by the Airy function. This is very clear at the level of the corresponding equations, because the equation degenerates, one develops a singularity: under this limit the Bessel equation converges not to the equation x'' = −x, which gives the sine function, but to the Airy equation, and so one obtains the Airy function. Okay, so in any event, I continue: the limit is det[S_α(k_i, k_j)], i, j from 1 to l. And what is S_α? S_α(x, y) = sin(α(x − y)) / (π(x − y)), a formula completely similar to the sine kernel of Dyson, and there is a connection between α and a: a/2 = cos α. So we can see that at a = 0 we have α = π/2, which is good; as a approaches 2, α goes to 0, and there are fewer and fewer particles; as a approaches −2, α goes to π, and there are more and more particles. So one can see that the formula makes perfect sense. And this is the theorem of Borodin, Okounkov and Olshanski.
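In the normalization I am assuming here (conventions differ by factors of 2), the discrete sine kernel is the Fourier projection onto the arc [−α, α] of the circle, and this is a one-line numerical check, my own sketch rather than anything from the lecture:

```python
import numpy as np
from scipy.integrate import quad

def sine_kernel(alpha, d):
    """Discrete sine kernel S_alpha(x, y) as a function of d = x - y."""
    if d == 0:
        return alpha / np.pi          # density of particles
    return np.sin(alpha * d) / (np.pi * d)

# S_alpha is the Fourier projection onto the arc [-alpha, alpha]:
# S_alpha(x, y) = (1 / 2 pi) * integral over [-alpha, alpha] of e^{i k (x - y)} dk
alpha = np.pi / 3
for d in range(0, 6):
    val, _ = quad(lambda k: np.cos(k * d) / (2 * np.pi), -alpha, alpha)
    assert abs(val - sine_kernel(alpha, d)) < 1e-10

# in the middle of the diagram a = 0, so alpha = arccos(a / 2) = pi / 2 and the
# particle density is alpha / pi = 1/2, as it should be by symmetry
assert abs(sine_kernel(np.pi / 2, 0) - 0.5) < 1e-12
print("projection identity verified")
```

The density α/π reproduces the monotonicity noted above: α shrinks to 0 as a goes to 2 (fewer particles) and grows to π as a goes to −2 (more particles).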
So in the limit, in the bulk of the Young diagram, what I have is the determinantal point process with the kernel which is called the discrete sine kernel. And, by the way, quite in analogy with the continuous sine kernel (this will be important as we proceed), the operator S_alpha is a projection in L². I take my function, which is in L² of Z, take its Fourier transform, which is now a function on the circle, restrict it to the arc from minus alpha to alpha (the circle is written in additive notation), and take the inverse Fourier transform. So it is in fact a spectral projection, just as the continuous sine kernel is a spectral projection. Now let me say very briefly, in the little time that remains to me, how one proves the Vershik-Kerov conjecture from the discrete sine kernel. The proof is based on a variational formula; in fact, the proof of the theorem of Logan-Shepp and of Vershik-Kerov, both of whom did it this way, is based on a variational formula. The dimension in the Plancherel measure is represented as a certain functional, for which they compute the extremal (and this is the function Omega), but for which they also compute the quadratic variation. The quadratic variation is expressed as a sum of several terms, and it is very interesting to point out that the Sobolev norm comes into the game: the Sobolev 1/2-norm of the deviation of the graph from the limit shape enters the game. But also, since the dimension of lambda is given in terms of the hook formula, various other quantities come into the determination of this functional.
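The spectral-projection description can be made concrete. A minimal numerical sketch (my own, using the convention in which a/2 = cos alpha and the particle density is alpha/pi, so the arc is from minus alpha to alpha): conjugating multiplication by the indicator of the arc by the Fourier transform gives exactly the discrete sine kernel:

```python
# The discrete sine kernel as a spectral projection on l^2(Z): the kernel of
# "Fourier transform, multiply by the indicator of the arc [-alpha, alpha],
#  inverse Fourier transform" is
#   S_alpha(x, y) = (1 / 2pi) * integral_{-alpha}^{alpha} e^{i t (x-y)} dt
#                 = sin(alpha * (x - y)) / (pi * (x - y)),
# with S_alpha(x, x) = alpha / pi, the particle density.
import cmath
import math

def projection_kernel(alpha: float, x: int, y: int, steps: int = 20000) -> float:
    """Kernel entry computed by direct numerical integration (midpoint rule)."""
    h = 2.0 * alpha / steps
    total = 0.0
    for k in range(steps):
        t = -alpha + (k + 0.5) * h
        total += cmath.exp(1j * t * (x - y)).real
    return total * h / (2.0 * math.pi)

def discrete_sine(alpha: float, x: int, y: int) -> float:
    if x == y:
        return alpha / math.pi
    return math.sin(alpha * (x - y)) / (math.pi * (x - y))

alpha = math.pi / 2.0   # the value at the centre of the diagram (a = 0)
for d in range(5):
    print(projection_kernel(alpha, d, 0), discrete_sine(alpha, d, 0))
```

The two columns printed agree up to the quadrature error, illustrating that the discrete sine kernel is indeed a spectral projection.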
So if we just write down this functional explicitly, we obtain a certain expression which involves, in particular, the local characteristics of the Young diagram. What do I mean by local characteristics? Let me formulate this precisely. For example, what Kerov called the number of corners of the Young diagram. What does it mean, the number of corners? Well, the number of corners means the number of corners: if you wish, the Young diagram encodes a partition, and this is the number of distinct summands in the partition. In the particle picture, a corner is the combination of a hole and then, immediately after it, a particle. And in fact one can prove that the number of corners, divided by the square root of n, converges under the Plancherel measure to a constant, and this constant can be computed. Besides corners, there are also configurations with hook length k: situations where there is a hole and then, at distance k, a particle. The number of such configurations, divided by the square root of n, also converges, and the limit is possible to compute. Okay, there we go. And I would like to point out how one obtains this kind of formulas. As the Borodin-Okounkov-Olshanski theorem shows, the Plancherel diagram locally looks like the sine process, but in different positions of the diagram one gets different sine processes: the parameter of the sine process slowly moves as one moves along the Young diagram.
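Two of the local computations described here are elementary determinantal identities. A minimal sketch (my own illustration, for one fixed value of the parameter alpha): the probability of the pattern "hole, then particle at distance k" is the difference of the one-point and two-point correlation functions, and the covariance of two occupation numbers is minus the square of the kernel, which is the negative correlation behind the slow variance growth:

```python
# Local statistics of the discrete sine process with parameter alpha.
# For a determinantal process with symmetric kernel S depending on x - y:
#   P(particle at x)                = S(0)
#   P(particles at x and x + k)     = S(0)**2 - S(k)**2     (2x2 determinant)
#   P(hole at x, particle at x + k) = S(0) - (S(0)**2 - S(k)**2)
#   Cov(xi_x, xi_{x+k})             = -S(k)**2  <=  0
# For k = 1 the third line is the per-site density of corners.
import math

def sine_kernel(alpha: float, k: int) -> float:
    """Discrete sine kernel as a function of the difference k = x - y."""
    if k == 0:
        return alpha / math.pi
    return math.sin(alpha * k) / (math.pi * k)

def hole_particle_prob(alpha: float, k: int) -> float:
    s0 = sine_kernel(alpha, 0)
    sk = sine_kernel(alpha, k)
    return s0 - (s0 * s0 - sk * sk)

def occupation_covariance(alpha: float, k: int) -> float:
    return -sine_kernel(alpha, k) ** 2

alpha = math.pi / 2.0          # the bulk centre of the diagram, where a = 0
print(hole_particle_prob(alpha, 1))                            # corner density
print([occupation_covariance(alpha, k) for k in range(1, 4)])  # all <= 0
```

To obtain the limiting constant for the whole diagram one would then average this quantity over the slowly varying parameter alpha along the diagram, as the lecture explains next; I do not attempt that normalization here.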
So, in fact, when one computes these asymptotic quantities, what one needs to do is take the average of the quantity according to the sine process and then average again over the parameter alpha. This is how one gets these explicit formulas. Then one needs to prove that the number in fact converges to its expected value, and this has to do with the fact that determinantal point processes have small variance, a property that we already discussed in the first class and will discuss in great detail tomorrow. The point is that the different parts of the Young diagram are, one can say in first approximation, independent; in fact they are not independent, they are negatively correlated, so the variance grows very slowly, and one obtains this convergence to a constant. There are also some non-local terms which require separate analysis, but I skip that. And from this convergence of local statistics of the Young diagram (it is possible to prove such a result for any local statistic of the Young diagram), one arrives at the proof of the Vershik-Kerov conjecture. And please allow me to conclude with an open problem. The number of corners converges to a constant, but what about the limit theorem for this quantity? Observe that this does not fall into the context of the Soshnikov limit theorem, because it is not an additive functional: it is a functional that depends on two positions, not one. And, well, experts in the audience can correct me, but to the best of my knowledge, such limit theorems in general are not known. Thank you very much.