Alexander, are you ready? Yes. So Alexander Bufetov will now give his third lecture on determinantal point processes. Thank you very much. We will now discuss a little bit of the general theory of determinantal point processes. Let us recall that in the last two lectures we treated three examples. First, the classical sine kernel of Dyson, S(x, y) = sin(pi(x−y)) / (pi(x−y)). Let me recall in this connection that the sine kernel is a spectral projection: it acts in L2(R), and it sends a function f to the inverse Fourier transform of the restriction of the Fourier transform of f to [−pi, pi]. Let me also point out that the sine kernel, by its very definition, by expanding the sine, has the so-called integrable form. Integrable form in its greatest generality means that the commutator with the operator of multiplication by the variable has finite rank. But in our situation, when I say integrable form, I will mean — maybe it is better to say Christoffel–Darboux form — a kernel that looks like the Christoffel–Darboux formula, (A(x)B(y) − B(x)A(y)) / (x − y), and indeed comes from the Christoffel–Darboux formula: as I mentioned in the first class, the sine kernel arises as a scaling limit of the Christoffel–Darboux kernel of the Hermite polynomials. Then — let me go a little bit achronologically — we discussed the discrete sine kernel, the discrete cousin of this kernel. Observe a difference between the two: there is essentially only one continuous sine kernel. I could put a parameter here, but all those kernels are taken one into another by dilation. The discrete sine kernel, by contrast, is a genuine one-parameter family. It is defined analogously: f goes to the inverse transform of the restriction of the Fourier transform of f to [−alpha/2, alpha/2].
But this formula is written on the torus: the Fourier transform here is a function in L2(R/Z). So again, by its very definition, the discrete sine kernel has integrable form. And let me immediately make an observation that will be important as this talk proceeds, especially for the discussion of the Gibbs property. The cornerstone of this integrability is the following property of our projection operators, which one might call the weak division property. Let L be a Hilbert space admitting a reproducing kernel Pi(x, y). What do I mean by that? I mean that for f in L, the value of f at a point is given by an inner product with the kernel: f(x) = <f, Pi(., x)>. In other words, the projection onto L is given by a kernel, and taking the value of a function at a point is a continuous functional. For such a reproducing-kernel Hilbert space, the weak division property is formulated as follows: if f is in L and f(p) = 0 at some point p — again, evaluation at a point makes sense precisely because it is a continuous functional — then f(x)/(x − p) again belongs to L. This is the weak division property that I use. It is not very difficult to check it, for example, for the sine kernel.
Let me check it for p = 0; the general case is the same. Let us go to the Fourier side. What does f(0) = 0 mean? It means that the Fourier transform of f has zero average. So we have a function of zero average supported on [−pi, pi], the range of the projection. And what does it mean to divide by x? On the Fourier side it means to take an antiderivative. If a function has zero average and is supported on [−pi, pi], then it admits an antiderivative which is again supported on [−pi, pi]. This proves the property for the sine kernel. But in fact more is true. Remark 1: any kernel of the form (A(x)B(y) − B(x)A(y)) / (x − y) — I will call this integrable form, even though the term is used in a more general context, going back to the work of Izergin–Korepin–Slavnov; I will use it in this specific setup — obeys the weak division property. This can be checked. Remark 2, from joint work with Roman Romanov from 2018: the converse holds. If L, a reproducing-kernel space, obeys the weak division property, then Pi is integrable — Pi has the form (*) above — and furthermore L is a space of holomorphic functions. I am not saying a space of entire functions; in fact that is not true. And I am not specifying holomorphic on what domain, because in fact this cannot be specified, as is shown by the example of the Bessel kernel.
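The Fourier-side division argument above can be tested numerically. This is an editorial sketch, not part of the lecture; the test function is my choice: take Fourier data ghat(t) = t on [−pi, pi], which is odd, so f(0) = 0, and its antiderivative H(t) = (t^2 − pi^2)/2 vanishes at both endpoints. Integration by parts gives f(x)/x = −i ∫ H(t) e^{itx} dt, so f(x)/x is again band-limited to [−pi, pi]:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 8001)
dt = t[1] - t[0]
w = np.ones_like(t); w[0] = w[-1] = 0.5        # trapezoid weights

ghat = t                          # Fourier data of f; odd, hence f(0) = 0
H = (t**2 - np.pi**2) / 2         # antiderivative of ghat, vanishing at +-pi

def synth(coeffs, x):
    # integral over [-pi, pi] of coeffs(t) * e^{i t x} dt
    return np.sum(w * coeffs * np.exp(1j * t * x)) * dt

for x in [0.7, 1.234, -2.5]:
    f_over_x = synth(ghat, x) / x
    g = -1j * synth(H, x)         # synthesis of the antiderivative data
    assert abs(f_over_x - g) < 1e-4
```

So dividing f by x corresponds, up to a factor −i, to replacing its Fourier data by the antiderivative, which stays supported in [−pi, pi].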
The space in Remark 2 is not in general a space of entire functions, because the kernel might have a singularity at some point, precisely like the Bessel kernel. The continuous Bessel kernel — which we saw in the talk of Tomohiro Sasamoto, not the discrete one that we did in class here — has a power singularity at zero, so its functions are not entire. They are still holomorphic in a neighborhood of the domain of definition, and the statement still holds; in fact it is an if and only if. For spaces of holomorphic functions of this type, the division property is quite clear, in fact. Okay, so this is a characterization of the operators having integrable form, which will be important as our discussion proceeds. And let me formulate a very naive open question. The result we proved with Roman Romanov has a very short proof; it is a ten-page note. When we first proved it, we thought we had proved it both in the continuous and in the discrete case, but in the discrete case we quickly found a mistake, and we did not put it in the note. The first part — that the weak division property implies integrable form — is completely general: it is an abstract, essentially algebraic identity with resolvents, and it holds in the discrete case as well. But the part about the space of holomorphic functions we do not have in the discrete case, although it is true that all the discrete examples, at least the ones I know, come from Bessel functions or Gamma functions or other classical functions. So here is a question which seems to me completely realistic to solve: is the second claim true in the discrete case? We do not have a proof of that.
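Stepping back to the first example: the description of the sine kernel as the Fourier restriction projection can itself be checked numerically. An editorial sketch (the sample points and grid size are my choices): the kernel of f -> inverse transform of (indicator of [−pi, pi] times the transform of f) is (1/2pi) ∫_{−pi}^{pi} e^{it(x−y)} dt, which should equal sin(pi(x−y))/(pi(x−y)):

```python
import numpy as np

def sine_kernel(x, y):
    # sin(pi(x-y)) / (pi(x-y)); np.sinc(d) is exactly sin(pi d)/(pi d)
    return np.sinc(x - y)

def restriction_kernel(x, y):
    # kernel of: Fourier transform, restrict to [-pi, pi], invert:
    # (1/2pi) * integral_{-pi}^{pi} e^{i t (x-y)} dt (imaginary part cancels)
    t = np.linspace(-np.pi, np.pi, 4001)
    w = np.ones_like(t); w[0] = w[-1] = 0.5    # trapezoid weights
    return np.sum(w * np.cos(t * (x - y))) * (t[1] - t[0]) / (2 * np.pi)

for (x, y) in [(0.3, 1.7), (-2.0, 0.5), (4.0, 4.0)]:
    assert abs(sine_kernel(x, y) - restriction_kernel(x, y)) < 1e-4
```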
Okay, so the third example, which we considered during the first class and which I will not consider in detail today, is of a different sort: the Bergman kernel. We will return to the Bergman kernel later; for today I leave this example aside. I want to start today, as I mentioned, by developing some general theory, and in the first place by proving an existence theorem for determinantal point processes. Before proving existence, let us recall what it is whose existence we prove. We pick a space E; in our examples E will be R, or maybe C, or maybe Z — some very simple space — but in general we just say that E is a Polish space. We pick mu, a sigma-finite measure on E, and K, a locally trace class operator acting on L2(E, mu). If this terminology is not clear, here is a simpler version: K is a bounded continuous function of two variables. Sometimes we will also consider unbounded kernels, precisely as in the case of the Bessel process, but this is not very important for the moment. Okay, and then, as I mentioned during the first class, a point process P — that is to say, a Borel measure on the space of configurations of E — is called determinantal if its l-th correlation measure is det[K(x_i, x_j)]_{i,j=1..l} times the product of dmu(x_i).
As today progresses, I will also give a different definition of a determinantal point process, equivalent but more convenient, which avoids the use of correlation functions. But first, my first aim for today is to discuss the question of existence of such a process. Why does such a process exist? Let me point out that the non-triviality of this question is underlined by the fact that the sine process was written down by Dyson in the 60s, but its existence is a theorem of the new millennium; a conceptual framework for its existence was created in the new millennium. In fact, the French physicist Odile Macchi pioneered the study of determinantal point processes in 1975 — she called them fermion point processes — and she proved a version of the theorem which I will now formulate, but the result she proved did not go far enough to establish the existence of the sine process. It took quite a while to formulate the conditions which I will now state. Also let me say that the theorem I will now formulate, the Macchi–Soshnikov theorem, also proved by Shirai and Takahashi, gives a very nice sufficient condition, but it is not a criterion. So in general the question remains open: for which kernels K does there exist a determinantal point process? And there are important examples — one of them we briefly saw in the talk of László Erdős, the Dyson Brownian motion — which are not covered by any general conceptual framework. For the examples coming from non-self-adjoint kernels, there does not exist a general framework to prove existence.
Okay, so let me formulate the theorem of Macchi, Soshnikov, Shirai and Takahashi. As I said, a weaker version was formulated by Odile Macchi in the 70s; then 25 years elapsed, Soshnikov completed Macchi's program, and simultaneously and independently Shirai and Takahashi proved the same result. The theorem is the following: if K is self-adjoint and a contraction, 0 <= K <= 1, then the determinantal process P_K exists. As I mentioned, if it exists it is unique — the moment problem is well posed in this situation — and I will also explain this in a different way in a moment. So the main question is whether it actually exists, and the theorem says that it does. The theorem Macchi initially proved was for contractions of this kind, but in most of the examples that we consider it is in fact projections that emerge. Let me also repeat that in many examples, such as the Dyson Brownian motion, the kernel that induces the process is not self-adjoint, and for such cases there is no general framework, only individual examples. And finally, let me mention that determinantal point processes have a cousin where instead of the determinant one writes the Pfaffian. The Pfaffian cousin is much poorer than the determinantal one: in the Pfaffian case there do not exist general existence results of this type. There is work of Kargin, but the examples do not satisfy its criteria — so there exist results, but they do not prove the existence of the examples. The sine process has a Pfaffian cousin, the Pfaffian sine process, but its existence cannot be proved conceptually; it can only be established by a limit transition.
So in the Pfaffian situation there does not exist an analogue of the Macchi–Soshnikov–Shirai–Takahashi theorem, which I think also underlines its non-triviality, despite the fact that the proof is very short — it can be given in two lines. Before I proceed to the proof, let me discuss the question of uniqueness. To address it, it will be convenient to reformulate the determinantal property, and to introduce a formalism which I will use all the time: the formalism of multiplicative functionals of point processes. A point process is a measure on configurations, and, as we discussed, a configuration is itself a measure; so it is a measure on measures. The quantities one naturally wants to compute are integrals of functions with respect to the underlying measures; in other words, it is very useful to consider additive functionals on the space of configurations. Recall the notation: the space of configurations is the space of locally finite subsets of E, and each configuration is a collection of particles. An additive functional is just the sum of the values of a function over the particles, S_f(X) = sum over x in X of f(x). So as not to ask oneself why the sum converges, let us assume that f has compact support. And, for example, the expectation of the additive functional is of course the integral of f with respect to the first correlation measure — this is essentially the definition of the first correlation measure.
It is however very useful to consider Laplace transforms of these quantities, or in other words multiplicative functionals — in the Soviet literature there also existed the term Bogoliubov functionals. So Psi_g(X) is the product over the particles x in X of g(x); assuming for the moment that g − 1 has compact support, this product converges. And now I can reformulate the determinantal property in these terms, and in fact this form is more convenient for me: the expectation of a multiplicative functional is E Psi_g = det(1 + (g − 1)K), for g − 1 supported in some bounded set B; one regularizes the determinant by putting the indicator chi_B on both sides, det(1 + chi_B (g − 1) K chi_B). This is the definition of a determinantal process in terms of multiplicative functionals. The advantage of this definition is that it is clear that such a process is unique. Why do the values of multiplicative functionals determine a point process uniquely? Because the quantity z_1^{#_{B_1}(X)} ... z_l^{#_{B_l}(X)} is itself a multiplicative functional. Recall that #_B(X) is just the cardinality of the intersection of the configuration X with B, and, as we discussed in the first class, a measure on the space of configurations is essentially by definition a collection of joint distributions of these random variables. By the way, a question that I did not discuss in the first lecture, and which is quite non-trivial, is what conditions these joint distributions must satisfy.
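Returning to the identity E Psi_g = det(1 + (g − 1)K): for a finite-rank projection kernel it can be verified directly, since the Fredholm determinant reduces to a matrix determinant in the basis of the range. An editorial sketch for rank two (the interval [0, 1], the orthonormal pair, and the test multiplier g are my choices): the left side is computed from the two-particle density rho_2 / 2, the right side as a 2x2 determinant.

```python
import numpy as np

x = np.linspace(0, 1, 1201)
dx = x[1] - x[0]
w = np.ones_like(x); w[0] = w[-1] = 0.5                       # trapezoid weights
phi = np.stack([np.ones_like(x), np.sqrt(3.0) * (2*x - 1)])   # orthonormal in L2[0,1]
g = 1 + 0.5 * np.sin(2 * np.pi * x)                           # test multiplier

# Left side: E[g(x1) g(x2)] for the 2-particle process with density rho2/2,
# rho2(x,y) = K(x,x) K(y,y) - K(x,y)^2, K = phi1 (x) phi1 + phi2 (x) phi2
Kmat = phi.T @ phi
d = np.diag(Kmat)
rho2 = np.outer(d, d) - Kmat**2
W = np.outer(w, w) * dx**2
lhs = np.sum(np.outer(g, g) * rho2 * W) / 2

# Right side: det(1 + A) with A_ij = <(g-1) phi_i, phi_j>
A = (phi * ((g - 1) * w * dx)) @ phi.T
rhs = np.linalg.det(np.eye(2) + A)

assert abs(lhs - rhs) < 1e-3
```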
As for the conditions these joint distributions must satisfy: there are obviously consistency conditions, but I am completely skipping this. So: what conditions must correlation functions satisfy in order to induce a point process? This is a non-trivial question. There are conditions developed by Lenard, but I do not want to discuss them in greater detail. One can look at the survey of Soshnikov from 2000 in the Russian Mathematical Surveys, precisely where this theorem is proved; the conditions are all written down there in detail — which also illustrates how difficult they are to use. In any event, it is clear that these quantities determine the joint distributions uniquely, and so in particular determine the point process uniquely: knowing the values of expectations of multiplicative functionals determines the point process uniquely, which is what I wanted to say. But again, this gives no clue about existence. It is very difficult to check whether such a point process exists — and indeed, if instead of a determinant I write a Pfaffian, I do not know any way of checking this. Also, let me say a few words about the determinant itself. As written, this is just the usual Fredholm determinant. The disadvantage of the Fredholm determinant is that it is only defined for trace class operators: the operator has to have finite trace in order for the expansion of the Fredholm determinant as a power series to make sense. In fact, it is more convenient to work not with the Fredholm determinant but with its Hilbert–Carleman regularization.
Let me write: det(1 + R) = exp( integral of R(x, x) dmu(x) ) times det_2(1 + R), where det_2 is precisely the Hilbert–Carleman regularization, det_2(1 + R) = e^{−tr R} det(1 + R). You might ask what I have achieved by multiplying and dividing. I have achieved something, because det_2 is defined in greater generality. The expression det(1 + R) is only defined if R has finite trace — if R is trace class, each factor here is defined separately. On the other hand, if by Lidskii's theorem one thinks of the determinant as the product of (1 + eigenvalues), then det_2(1 + R) is the product of (1 + lambda_i) e^{−lambda_i}, which is defined whenever the sum of squares of the eigenvalues converges, that is to say, whenever R is Hilbert–Schmidt. So the whole expression det_2 is defined for Hilbert–Schmidt operators rather than for trace class operators; indeed, the point of Hilbert's construction was to define determinants for Hilbert–Schmidt operators. And this regularized determinant can often be defined even in situations when the operator does not have finite trace. Precisely, look at the operator (g − 1)K without the indicators: it is clearly Hilbert–Schmidt if g − 1 has compact support. It is however much less clear — in fact, I do not know how to check — under what assumptions such an operator has finite trace. So there are two ways out: one can insert the indicator chi_B, but it seems to me that the more convenient way is to understand the symbol det in this regularized sense, and then the formula makes sense.
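The relation det_2(1 + R) = e^{−tr R} det(1 + R) = product of (1 + lambda_i) e^{−lambda_i} is easy to check in finite dimensions, where everything is trace class. An editorial sketch with a random small matrix (sizes and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 0.1 * rng.standard_normal((5, 5))        # a small toy "operator"

# direct Hilbert-Carleman regularization
det2_direct = np.exp(-np.trace(R)) * np.linalg.det(np.eye(5) + R)

# Lidskii-style form: product over eigenvalues of (1 + lambda) e^{-lambda};
# this is the expression that survives when R is only Hilbert-Schmidt
lam = np.linalg.eigvals(R)
det2_eigs = np.prod((1 + lam) * np.exp(-lam))

assert abs(det2_direct - det2_eigs) < 1e-10
```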
We shall see that this formula is much more convenient than the previous one for several reasons. Of course it is also possible to consider a symmetrized version: one can equally write det(1 + sqrt(g−1) K sqrt(g−1)), so as to take the determinant of 1 plus something self-adjoint. Okay. So much for the alternative definition of a determinantal point process, and for the question of uniqueness, which we have now completely covered. Now let me give the proof of the theorem. The proof is inductive, and let me start with an operator of rank one, and specifically with a projection: K is a one-dimensional projection, Kf = <f, phi> phi. By the way, let me note that this formula is equivalent to the formula with correlation functions: this follows by expanding the determinant as a series of traces and taking g of the form 1 + epsilon times something and expanding in powers of epsilon; one recovers precisely the earlier formula. For a rank-one projection, it is clear from the definition of the correlation functions that all the determinants arising there will be zero except the first one, so there is exactly one particle, and since K(x, x) = |phi(x)|^2, it is distributed with probability density |phi|^2. Already here let me make an observation that will be important in what follows: the function phi itself does not have a meaning in terms of the point process.
The square |phi|^2 has a meaning, but the function phi itself does not: if I multiply phi by a function taking values in the unit circle, the process does not change. So a kernel uniquely defines a process, but the process does not uniquely define a kernel. There is some freedom in choosing the kernel — in particular the gauge freedom of multiplying by a function in one variable and dividing by the same function in the other: if K(x, y) is replaced by u(x) K(x, y) / u(y), then it is clear from the definition that none of the determinants det[K(x_i, x_j)] change. But it is an open question: if two self-adjoint kernels induce the same process, is it true that one is taken into the other by such a gauge transformation? This is not clear. More generally, when do two kernels induce the same process? This is not clear. Okay, but in any event, that was the case rank K = 1. Let us do rank K = 2: Kf = <f, phi_1> phi_1 + <f, phi_2> phi_2, with phi_1 and phi_2 orthonormal. Then again one can check that the first correlation function is what it should be, and the second correlation function is the absolute value squared of a determinant: rho_2(x, y) = |det [phi_1(x), phi_1(y); phi_2(x), phi_2(y)]|^2. Very nice: this is a point process with two particles and with this density. It remains to check that this is consistent, and it can be checked in the way I explained in the first class, when we discussed the correlation functions of the Gaussian Unitary Ensemble.
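This two-particle density can be checked for consistency numerically: integrating the two-point correlation rho_2 in one variable should return (N − 1) times the first correlation function, which for N = 2 particles is rho_1 itself. An editorial sketch with a toy orthonormal pair on [0, 1] (my choice of functions and grid):

```python
import numpy as np

y = np.linspace(0, 1, 4001)
dy = y[1] - y[0]
w = np.ones_like(y); w[0] = w[-1] = 0.5          # trapezoid weights

phi1 = lambda s: np.ones_like(np.asarray(s, dtype=float))
phi2 = lambda s: np.sqrt(3.0) * (2 * np.asarray(s, dtype=float) - 1)  # orthonormal

def K(a, b):
    # rank-two projection kernel
    return phi1(a) * phi1(b) + phi2(a) * phi2(b)

for x in [0.1, 0.37, 0.9]:
    rho2 = K(x, x) * K(y, y) - K(x, y)**2        # the 2x2 correlation determinant
    integral = np.sum(w * rho2) * dy
    # integrating out y gives (2 - 1) * rho1(x) = K(x, x)
    assert abs(integral - K(x, x)) < 1e-6
```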
So one can check that for the process with this density, if you integrate out the second variable, you get the first correlation function, which of course is |phi_1|^2 + |phi_2|^2. And so forth: one has to verify that the distribution on pairs of particles with this density indeed has this first correlation function, but this is immediate; same for rank three, rank four, and so on. The last step, whose details I leave as an exercise, is to approximate a general projection K by finite-dimensional ones and obtain the process in the limit. In fact, not only any self-adjoint projection but any self-adjoint contraction can be approximated in this way. I skip the details because, while not difficult, they are somewhat messy. Approximate in what sense? Uniformly on compact sets. How do the measures converge? In the vague sense. Why is this enough? Because these are probability measures, so the limit measure is also a probability measure, and so on. There are some routine details to fill in, but this gives the proof of the theorem of Macchi, Soshnikov, Shirai and Takahashi. I should say, by the way, that Soshnikov's approach was a little different: Soshnikov, following Macchi, started with a contraction. Let me pursue a short digression on how the theorem was originally proved; this digression will also allow me to motivate the multiplicative-functional formula. So let us imagine that we have a matrix L.
In this digression I use the notation of Borodin and Olshanski — it is the one and only time that I denote a matrix by L; they call the resulting process the L-process, or L-ensemble — because I will routinely use the letter L for subspaces. So, just for this minute, L is an M by M matrix, say positive definite, and I define a probability measure on the set of subsets of {1, ..., M} by P(X) = det L_X / det(1 + L), where L_X is the submatrix of L with rows and columns indexed by X. The starting point is the formula det(1 + L) = sum over X of det L_X, the expansion of the determinant into principal minors: to have a probability measure, one normalizes by det(1 + L). Now one can check that the expectation of a multiplicative functional for this L-process is E Psi_g = det(1 + gL) / det(1 + L) — again by the same expansion, because by definition this expectation is the sum over X of (product over i in X of g_i) times det L_X, divided by det(1 + L). And this in turn can be written as det(1 + (g − 1)K), where K = L(1 + L)^{-1}; one can put the inverse on either side, since the two factors obviously commute. End of story: this motivates the definition, except that the K-formalism is more general than the L-formalism.
One plus L, of course — excuse me, thank you very much, I mis-wrote: it is K = L(1 + L)^{-1}, that is, L = K(1 − K)^{-1}, thank you. So this K-formalism is more general than the L-formalism, because here K is necessarily a strict contraction, as one can see from K = L(1 + L)^{-1}. And the proof of Soshnikov works by taking a given K, multiplying it by 1 − epsilon so that it becomes a strict contraction, finding the corresponding L, and then proving convergence as epsilon goes to zero for the limiting K. This is Soshnikov's proof, starting from here. Okay, very good. So now that we have proved the existence of determinantal point processes, I would like to formulate a very simple proposition, which will be quite important for what follows, about the behavior of a determinantal point process when multiplied by a multiplicative functional. For L-processes it is more or less clear already: if I have a process determined by a kernel L, then multiplying the kernel L by a function g corresponds to multiplying my process by a multiplicative functional. Let me write it: if L gives the process P, then gL gives Psi_g P, up to normalization. What I want to explain now is that this simple remark carries over to the full generality of determinantal point processes: the determinantal property is preserved by multiplication by a multiplicative functional.
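All of this can be verified by brute-force enumeration on a small ground set. An editorial sketch (ground set of size 4, random positive definite L; seed and test subset are my choices): the subset probabilities det L_X / det(1 + L) sum to 1, the correlation P(S contained in X) equals det K_S with K = L(1 + L)^{-1}, and E Psi_g = det(1 + gL) / det(1 + L):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
L = B @ B.T + 0.1 * np.eye(4)                 # positive definite
Z = np.linalg.det(np.eye(4) + L)              # normalization det(1 + L)

def det_sub(M, idx):
    idx = list(idx)
    return np.linalg.det(M[np.ix_(idx, idx)]) if idx else 1.0

subsets = [s for r in range(5) for s in combinations(range(4), r)]

# det(1 + L) = sum over subsets of det L_X, so the P(X) sum to 1
assert abs(sum(det_sub(L, s) for s in subsets) / Z - 1) < 1e-8

# correlations: P(S contained in X) = det K_S with K = L(1+L)^{-1}
K = L @ np.linalg.inv(np.eye(4) + L)
S = (0, 2)
prob_S = sum(det_sub(L, s) for s in subsets if set(S) <= set(s)) / Z
assert abs(prob_S - det_sub(K, S)) < 1e-8

# multiplicative functional: E[prod_{i in X} g_i] = det(1 + gL)/det(1 + L)
g = np.array([0.5, 1.3, 0.8, 2.0])
E = sum(np.prod(g[list(s)]) * det_sub(L, s) for s in subsets) / Z
assert abs(E - np.linalg.det(np.eye(4) + g[:, None] * L) / Z) < 1e-8
```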
So I pass to this remark — a proposition from 2012. Proposition: the product of a determinantal measure and a multiplicative functional, normalized, is again determinantal. Of this simple proposition let me give a proof, and it will become clear why I wanted to use the regularized definition of the determinant. Let a kernel K induce the determinantal point process P = P_K — perhaps, by the way, I did not write this notation before: we write P = P_K. Again, it is important to keep in mind that K determines P_K, but P does not determine K; the notation works only one way. Let Psi_g be the multiplicative functional corresponding to g. I aim to prove that the normalized measure Psi_g P_K, divided by the normalization integral of Psi_g over the space of configurations, is determinantal. To do this, I take one more multiplicative functional Psi_h and compute its expectation under the new measure: the integral of Psi_g Psi_h dP_K over the integral of Psi_g dP_K equals det(1 + (gh − 1)K) / det(1 + (g − 1)K), and this is in fact equal to det(1 + (h − 1) gK(1 + (g − 1)K)^{-1}). One can check this; it is just an identity with determinants. So the new kernel is gK(1 + (g − 1)K)^{-1}. Is it possible to see this from the blackboard? For some reason I always end up where it is difficult to see. Okay. Some effort is needed to justify this multiplicativity property for the regularized determinants, but it can be done.
And so this is the end of the proof. Yes, let me point out that this operator 1 + (g − 1)K is of course invertible, because otherwise the determinant in the denominator would be zero; so I have the right to invert it. And so the proposition is proved. Let me make a remark: it will be very important for us in the discussion of the Gibbs property, probably tomorrow, to study conditions under which these multiplicative functionals can actually be defined — situations in which these determinants need to be regularized, when the operators are only Hilbert–Schmidt and not trace class. In fact, in some applications, in particular in the consideration of the Ginibre point process, it is important to consider situations where the operators are not even Hilbert–Schmidt but belong to some higher Schatten class, say the third Schatten class. But for the time being this is enough for us. Now let me formulate some properties of this new operator. It is sometimes convenient to symmetrize. What do I mean by symmetrize? I can write that the determinant above equals det(1 + (h − 1) √g K(1 + (g − 1)K)⁻¹ √g). And then let me point out that if K is a projection onto a subspace L, then the symmetrized operator K_g = √g K(1 + (g − 1)K)⁻¹ √g is a projection onto the subspace √g L. Let us very quickly check this.
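The symmetrization step is again a det(1 + AB) = det(1 + BA) manipulation, and it too can be confirmed numerically. A small sketch under the same illustrative assumptions as before (diagonal g, h, so √g commutes with h):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T
K = K / (2 * np.linalg.norm(K, 2))          # strict contraction
G = np.diag(rng.uniform(0.5, 2.0, n))
H = np.diag(rng.uniform(0.5, 2.0, n))
I = np.eye(n)
sqG = np.sqrt(G)                            # g > 0, so sqrt(g) is defined

inv = np.linalg.inv(I + (G - I) @ K)
unsym = G @ K @ inv                         # g K (1+(g-1)K)^{-1}
sym = sqG @ K @ inv @ sqG                   # sqrt(g) K (1+(g-1)K)^{-1} sqrt(g)

# The unsymmetrized and symmetrized kernels give the same determinants:
assert np.isclose(np.linalg.det(I + (H - I) @ unsym),
                  np.linalg.det(I + (H - I) @ sym))
```

The symmetrized form has the advantage that it is manifestly self-adjoint when K is, which is what makes the projection statement below possible.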
Proof. Suppose first that φ belongs to √g L, which means φ = √g ψ with ψ in L. Apply √g K(1 + (g − 1)K)⁻¹ √g to φ. The inner √g turns √g ψ into gψ; and since Kψ = ψ, we have (1 + (g − 1)K)ψ = ψ + (g − 1)ψ = gψ, so the inverse applied to gψ gives back ψ. Then Kψ = ψ, and the outer √g gives √g ψ = φ. Which is what I wanted to begin with. Conversely, suppose φ is orthogonal to √g L, and write again φ = √g ψ. Orthogonality to √g L means precisely that √g φ = gψ is orthogonal to L, that is, K(gψ) = 0. Now set ζ = (1 + (g − 1)K)⁻¹ gψ, so that gψ = ζ + (g − 1)Kζ; applying K and using K(gψ) = 0 gives (1 + K(g − 1))Kζ = 0, and this operator is invertible together with 1 + (g − 1)K, so Kζ = 0. Hence K_g φ = √g Kζ = 0. So K_g φ = 0, and this is proved completely. Let me illustrate this property on the example of orthogonal polynomial ensembles, which we have seen many times at this school. So let us consider an orthogonal polynomial ensemble: the measure proportional to the square of the Vandermonde determinant of the x_i times the product of w(x_i) dx_i.
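The projection property just proved is easy to confirm in finite dimensions. In the sketch below (dimension, the subspace L, and the positive function g are illustrative choices), K is an orthogonal projection and K_g = √g K(1 + (g − 1)K)⁻¹ √g is checked to be the orthogonal projection onto √g L:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 8, 3
# K: orthogonal projection onto a k-dimensional subspace L = col(Q).
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
K = Q @ Q.T
g = rng.uniform(0.5, 2.0, n)     # positive, bounded away from 0
G, sqG = np.diag(g), np.diag(np.sqrt(g))
I = np.eye(n)

Kg = sqG @ K @ np.linalg.inv(I + (G - I) @ K) @ sqG

# Kg is a self-adjoint idempotent, i.e. an orthogonal projection...
assert np.allclose(Kg, Kg.T)
assert np.allclose(Kg @ Kg, Kg)
# ...which fixes sqrt(g) * v for v in L, so its range contains sqrt(g) L...
v = Q[:, 0]
assert np.allclose(Kg @ (np.sqrt(g) * v), np.sqrt(g) * v)
# ...and its rank equals dim L, so the range is exactly sqrt(g) L.
assert np.isclose(np.trace(Kg), k)
```

Self-adjointness follows from the swap identity K(1 + (g − 1)K)⁻¹ = (1 + K(g − 1))⁻¹K; the two range checks then pin the range down to √g L.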
This of course is a determinantal point process, as we discussed in the very first class, corresponding to the projection onto the subspace L spanned by √w, x√w, …, x^{n−1}√w. Very good. Now if I consider this measure times a multiplicative functional, I write ψ_g P, which equals the same Vandermonde squared times the product of g(x_i) w(x_i) dx_i. This is clearly again an orthogonal polynomial ensemble, now for the weight gw, and it is clear that the corresponding subspace is L_g = √g L. Observe, however, that it is much less clear how to write the kernel for this new orthogonal polynomial ensemble. Even if L comes from your favourite, very classical orthogonal polynomial ensemble — say Hermite polynomials — it is completely unclear how to write the kernel for the weight gw explicitly. So let me point out the convenience of this very simple proposition: it allows us to work directly with ranges in situations where it is not possible to write kernels explicitly. And so let me write down a corollary. Corollary: we shall now, in these terms, write down the point process conditioned on the event that it contains no particles in a given set. So let us consider a subset B of E, and let us consider the event — let me denote it Conf(E; E∖B) — the set of configurations X such that X has no intersection with B, that is, X has no particles in B. My claim is that the determinantal point process P_K, restricted to Conf(E; E∖B) and obviously normalized, is again a determinantal point process whose kernel is the projection onto the subspace χ_{E∖B} L.
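"Working with ranges" can be made concrete on a grid. The sketch below (the grid, the Gaussian-type weight, and the function g are all illustrative choices, not the lecture's) builds the Christoffel–Darboux projection kernel by orthonormalizing the columns x^k √w(x) directly, with no explicit orthogonal polynomials, and then does the same for the perturbed weight gw:

```python
import numpy as np

# Discretized sketch: the projection kernel onto span{x^k sqrt(w) : k < n},
# obtained by QR orthonormalization on a grid rather than via explicit
# orthogonal polynomials.
def projection_kernel(x, w, n):
    B = np.stack([x**k * np.sqrt(w) for k in range(n)], axis=1)
    Q, _ = np.linalg.qr(B)       # orthonormal basis of the same span
    return Q @ Q.T               # kernel of the orthogonal projection

x = np.linspace(-1, 1, 200)
w = np.exp(-x**2)                # a Gaussian-type weight (illustrative)
g = 1 + 0.5 * x**2               # a positive multiplicative functional
n = 5

K  = projection_kernel(x, w, n)        # kernel for the weight w
Kg = projection_kernel(x, g * w, n)    # kernel for the weight g*w

assert np.allclose(K @ K, K) and np.allclose(Kg @ Kg, Kg)
# The two ranges are related by multiplication by sqrt(g):
v = np.sqrt(g) * K[:, 0]               # sqrt(g) times a vector of L
assert np.allclose(Kg @ v, v)
```

This is exactly the point of the proposition: even when no closed Christoffel–Darboux formula is available for gw, the range √g L, and hence the kernel, is completely determined.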
Okay, so let us prove this. In fact it is a one-line corollary: the restricted measure is equal to Z⁻¹ times the product of P_K with the multiplicative functional of the characteristic function of E∖B, where Z⁻¹ is the normalization. Conditioning on the absence of particles in B corresponds precisely to multiplying my measure by the multiplicative functional of the characteristic function of the complement of B, and since the square root of a characteristic function is the characteristic function itself, applying the previous proposition we obtain the claim. Let me again underline that this is not the same as forgetting. In any point process there is a general procedure of forgetting: you can consider a point process on R and forget R₋, that is, consider its restriction to R₊. Forgetting acts on the level of kernels: take the kernel and consider its restriction to R₊. That is not what is happening here. Here it is conditioning on the fact that there are no particles: it is not that I forget about the particles in B, it is that I require that there are no particles there. And so then I have to "forget", so to speak, not on the level of the kernel but on the level of the subspace. Okay, Ken gives me his chair-of-department look, so let me just say that tomorrow we will continue with the behaviour of determinantal processes under conditioning, and we will prove a lemma, from joint work with Qiu and Shamov, that conditioning both on the absence of particles and on the presence of particles preserves the determinantal property. Thank you very much.
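The distinction between conditioning and forgetting can be seen numerically. In the sketch below (sizes, the subspace, and the set B are illustrative), "forgetting" restricts the kernel to the complement of B, while "conditioning" projects onto the subspace χ_{E∖B} L; the two operators are genuinely different, and only the second is a projection:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 8, 4
# K: projection kernel of a determinantal process on the ground set {0,...,7}.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
K = Q @ Q.T
B = [0, 1]                               # condition: no particles in B
rest = [i for i in range(n) if i not in B]

# "Forgetting": restrict the kernel itself to the complement of B.
K_forget = K[np.ix_(rest, rest)]

# "Conditioning": project onto chi_{E\B} L -- the span of the columns of
# Q with the coordinates in B cut out, viewed on the complement of B.
QB = Q[rest, :]
Qc, _ = np.linalg.qr(QB)
K_cond = Qc @ Qc.T

assert np.allclose(K_cond @ K_cond, K_cond)   # a genuine projection kernel
assert not np.allclose(K_forget, K_cond)      # conditioning != forgetting
```

Forgetting describes the marginal of the process on E∖B (still determinantal, but generally not with a projection kernel), whereas conditioning on the absence of particles in B produces the projection onto χ_{E∖B} L, exactly as in the corollary.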