… p_l(x) p_l(y) √(w(x) w(y)), where the p_l are the orthonormal polynomials. There is a constant which we do not care about. I mentioned last time, and left as an exercise, that all of the correlation functions — that is to say, all of the projections of this measure onto the remaining coordinates — also have determinantal form: the full measure is a determinant of size n, and if I integrate over all variables except five, I get a determinant of size five.

But today I want to discuss a different thing. I want to illustrate, on the example of this model, a different phenomenon: the behaviour of Palm measures. So let us consider the conditional measure obtained by fixing x_n = p. These are precisely the Christoffel deformations that we saw here in the talk of Pierre Lazag. As one can see by direct substitution, the conditional measure is again an orthogonal polynomial ensemble, now with the weight multiplied by the squared distance to p: (p − x_i)² w(x_i) dx_i. So this is the conditional measure with a particle fixed at p. Now the question arises: let us move this fixed particle. Write ℙ^p for the measure with the n-th particle fixed at p, and ℙ^q for the very same thing with the particle fixed at q instead. Let us compute the Radon–Nikodym derivative dℙ^p/dℙ^q at a configuration x = (x_1, …, x_{n−1}). It contains precisely the product of the factors (p − x_i)² / (q − x_i)².
The weights w(x_i) simply cancel, but there remains the factor w(p)/w(q), and there is also a normalization factor. One can see that the normalization of ℙ^p is, by definition, the first correlation function at p, namely K_n(p, p), and likewise K_n(q, q) for ℙ^q. So altogether

dℙ^p/dℙ^q (x_1, …, x_{n−1}) = (w(p)/w(q)) · (K_n(q, q)/K_n(p, p)) · ∏_{i=1}^{n−1} (p − x_i)² / (q − x_i)².

These are just two measures on configurations of n − 1 particles which differ by multiplication by a function — by this function, which I denote Ψ^{p,q}, so that ℙ^p = Ψ^{p,q} ℙ^q. That is to say, the ℙ^p-measure of a set is obtained by integrating Ψ^{p,q} over that set with respect to ℙ^q. [A question from the audience.] It is just notation for the Radon–Nikodym derivative of one measure with respect to another: when I write μ₁ = f μ₂, it is the same as writing dμ₁/dμ₂ = f. There are many notational conventions here, but I will use this one. So these functions can be computed explicitly, and this is the finite-n formula. The aim of today's talk is to obtain such a formula in the scaling limit. This project goes back, by the way, to the work of Olshanski on the gamma kernel, who obtained such formulas for the gamma kernel.
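Since the finite-n formula is completely explicit, it can be checked by brute force on a toy discrete analogue. The sketch below uses an arbitrary grid and weight chosen purely for illustration (none of these choices come from the lecture's model), and verifies that the ratio of the two conditioned measures matches the product formula with the w(p)/w(q) and ρ_1 factors:

```python
import itertools
import math

# Toy discrete orthogonal-polynomial-type ensemble: n particles on a grid,
# P(X) proportional to prod_i w(x_i) * Vandermonde(x)^2.
grid = range(8)
n = 3
w = lambda x: math.exp(-0.3 * x)          # arbitrary positive weight (assumption)

def weight(xs):
    vdm_sq = math.prod((a - b) ** 2 for a, b in itertools.combinations(xs, 2))
    return math.prod(w(x) for x in xs) * vdm_sq

configs = list(itertools.combinations(grid, n))
Z = sum(weight(c) for c in configs)
P = {c: weight(c) / Z for c in configs}

def rho1(p):                              # first correlation function = P(p in X)
    return sum(P[c] for c in configs if p in c)

p, q = 0, 7
for y in itertools.combinations([x for x in grid if x not in (p, q)], n - 1):
    # left side: ratio of the measures conditioned on a particle at p vs at q
    lhs = (P[tuple(sorted(y + (p,)))] / rho1(p)) / (P[tuple(sorted(y + (q,)))] / rho1(q))
    # right side: the explicit Radon-Nikodym formula from the lecture
    rhs = (w(p) / w(q)) * (rho1(q) / rho1(p)) \
        * math.prod((p - yi) ** 2 / (q - yi) ** 2 for yi in y)
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(rhs))
print("Radon-Nikodym ratio formula verified on the toy grid")
```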
So, for the determinantal process with the gamma kernel — please allow me not to write it explicitly now; maybe we will do it later — Olshanski obtained such formulas by taking the scaling limit in this finite-n formula. But let me point out that this is delicate: there are real difficulties in Olshanski's approach. For the Ginibre ensemble, this approach was later also adopted by Osada and Shirai. The difficulties are manifold, and due, for example, to the fact that this function is not bounded, as you can see — neither above nor below, since there is a denominator. In any event, it is difficult to pass to the limit in such a formula, especially when f is not bounded; if f were bounded, it could be possible. The basic obstruction is that equivalent measures can become mutually singular under a limit transition: just think about your favourite sequence of measures with densities converging to a delta mass. Trying to prove such an identity by choosing a sequence of approximations μ'_n = f_n μ_n creates a difficulty precisely because it is very possible that the sequences μ'_n and μ_n converge to two measures which are mutually singular. It is very easy to construct examples where the identity holds for each n, μ_n converges to μ and μ'_n converges to μ', and yet the limiting identity is not true. So the approach that we will adopt is completely different: it is direct and does not use approximation by orthogonal polynomials — although orthogonal polynomials will appear at the end of the day.

OK, so before I start — yes? [A question.] Yes, the statement is also true, and completely relevant, in the discrete orthogonal polynomial case. As you say, there it is essentially obvious, because one just compares measures on countable sets.
But let me point out that when you take the limit transition, you obtain measures on spaces of infinite configurations — for example, on infinite binary sequences for the discrete sine process — and there such a statement is not immediate; we shall see this for the discrete sine process. Let me also say, and I will address this in detail in this talk, that the symbol I wrote — "fix x_n = p and consider the conditional measure" — is delicate for an infinite point process. What does it mean? This is the so-called Palm measure, and I will give the detailed definition today.

But before I start on this, I have to keep a promise from my first class, and so we shall pursue a digression. I promised to prove that the probability of seeing k particles in a fixed interval for a determinantal point process decays as e^{−αk²}. I pursue this digression because the idea can be illustrated very conveniently on the example of orthogonal polynomial ensembles. If I have an interval I of length ε, then, just by looking at the Vandermonde determinant, it is clear that the probability that the orthogonal polynomial ensemble puts k particles in I is at most ε^{k²}, with a constant depending on the other parameters. So there we go: just by looking at the Vandermonde determinant, I get this k².
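The ε^{k²} power can be seen by pure scaling: the ensemble's density carries a squared Vandermonde, and the integral of the squared Vandermonde over [0, ε]^k scales exactly like ε^{k²} (k(k−1) powers from the squared Vandermonde plus k from the volume). A minimal numerical sketch (the midpoint grid below is just a convenient quadrature):

```python
import itertools

# Integral of Vandermonde(x)^2 over [0, eps]^k via a midpoint Riemann sum.
# Substituting x_i = eps * u_i shows the integral equals eps^{k^2} times the
# integral over [0, 1]^k, which the ratio below reproduces exactly.
def riemann_sum(eps, k, m=12):
    h = eps / m
    nodes = [(i + 0.5) * h for i in range(m)]
    total = 0.0
    for xs in itertools.product(nodes, repeat=k):
        v = 1.0
        for a, b in itertools.combinations(xs, 2):
            v *= (a - b) ** 2
        total += v
    return total * h ** k

k = 3
ratio = riemann_sum(0.5, k) / riemann_sum(1.0, k)
assert abs(ratio - 0.5 ** (k * k)) < 1e-12   # eps^{k^2} scaling, here k^2 = 9
print(ratio)
```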
My aim now, in pursuing this little digression and fulfilling the promise from the first class, is to prove this statement for general determinantal processes — illustrating, once again, that orthogonal polynomial ensembles are a very convenient model on which to verify statements about determinantal point processes that can then often be proved in great generality. The first manifestation of this we saw last class, when we saw that taking the product of the point process with a multiplicative functional corresponds to multiplying the range of the operator by the weight: very easy to verify for orthogonal polynomial ensembles, and true in great generality.

So let us prove this. I need some assumptions, which I write here. First, to simplify the exposition, let me assume that the length of I is less than 1; the statement holds in general because of negative correlations, but let me just assume this. [A question: what is k here?] An excellent question: k is a fixed number, and n is very large — but in fact the proof I will give will again not be by limit transition; it will be a direct proof from correlation functions, which precisely illustrates the convenience of working with correlation functions. Second, I will assume that the maximum of the k-th derivative of the kernel is at most C · k!. This is a convenient assumption which holds in all the examples you can think of; in fact, it would not hurt to put C^k · k! or something like this.
But let me write it like this just for simplicity; if I put, for example, C^k, it does not matter either. [From the audience: the maximum on the interval?] On the interval I, yes, thank you very much. With these assumptions I aim to prove the estimate, and the proof is very simple. The probability of k particles in I is at most — less than or equal to, because it is bounded by an expectation —

(1/k!) ∫_{I^k} ρ_k(x_1, …, x_k) dx_1 … dx_k,

which is just the definition of the correlation function, if you wish. And this in turn is

(1/k!) ∫_{I^k} det[K(x_i, x_j)]_{i,j=1}^k dx_1 … dx_k.

So let us analyze this determinant, and for this it will be convenient for us to use the formalism of divided differences. Recall that divided differences of a function f are defined inductively. The first divided difference is what you think it is,

f[x_1, x_2] = (f(x_1) − f(x_2)) / (x_1 − x_2),

and the n-th is

f[x_1, …, x_{n+1}] = (f[x_1, …, x_n] − f[x_2, …, x_{n+1}]) / (x_1 − x_{n+1}).

(I always put the most important formula in the blind spot — let me rewrite it here where everybody can see it.) Moreover, a divided difference of order n equals the n-th derivative at some intermediate point, divided by n!:

f[x_1, …, x_{n+1}] = f^{(n)}(ξ) / n!, with ξ in the interval I.

This is just the mean value theorem for divided differences, coming from Lagrange interpolation.
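The inductive definition and the mean-value identity above can be sketched in a few lines; the nodes below are arbitrary, and the check uses the fact that for f(x) = x³ the third derivative is constant, so f'''(ξ)/3! = 1 no matter where ξ falls:

```python
# Divided differences, exactly as defined inductively above:
# f[x1] = f(x1), and
# f[x1,...,x_{n+1}] = (f[x1,...,xn] - f[x2,...,x_{n+1}]) / (x1 - x_{n+1}).
def divided_difference(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[:-1]) - divided_difference(f, xs[1:])) \
        / (xs[0] - xs[-1])

# Mean-value form: the n-th divided difference equals f^{(n)}(xi) / n! for
# some xi between the nodes.  For f(x) = x^3 on four nodes this is exactly 1.
nodes = [0.1, 0.4, 0.7, 1.3]
assert abs(divided_difference(lambda x: x ** 3, nodes) - 1.0) < 1e-9
print("divided differences agree with f'''(xi)/3!")
```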
So then, let me erase the correlation-function computation and continue from the bound. The determinant det[K(x_i, x_j)] is equal precisely to the Vandermonde times the determinant of divided differences at x_1, …, x_k. What do I mean by this? The first column just does not change: it is K(x_1, x_1), K(x_2, x_1), …, K(x_k, x_1). In the second column I take the first divided difference, and so on; the divided differences in row l are always taken with x_l serving as a parameter, so the entry in row l and column j is the (j − 1)-st divided difference of the function x ↦ K(x_l, x) at the points x_1, …, x_j. Then I evaluate this determinant simply as a sum of k! terms, and each term is a product of entries, each bounded by the maximum of the corresponding derivative of the kernel divided by the factorial — which is exactly what we bounded above by C. And the whole point of taking the divided differences was to pull out the Vandermonde: the Vandermonde gives me the desired |I|^{k(k−1)/2}. [To a question:] f here is precisely the kernel K. K is a function of two variables; one of them is fixed as a parameter, and I take the divided differences in the other, at the corresponding points.
That is exactly right — let me say it in greater detail. In the l-th row of the matrix I have the entries K(x_l, x_1), K(x_l, x_2), …, K(x_l, x_k). I consider this row as the function f_l(x) = K(x_l, x) in the variable x, with x_l serving as a parameter, and in this row I take the divided differences of f_l. Of course, in different rows I have different functions f_l, Igor. If this point creates questions, let us consider the 2 × 2 example: taking the first divided difference in the second column is just subtracting the first column from the second and dividing — subtracting columns in a matrix, which does not change the determinant, while the division pulls out the factor. Do you agree? This is exactly what is written, iteratively. For a larger matrix — please allow me not to write it out for 3 × 3 and so on — first I subtract the first column from every column, which gives divided differences of order one and pulls out the products (x_2 − x_1), (x_3 − x_1), and so on; then, with the remaining columns, I take divided differences of second order, and I continue in the same way.
I hope that in convincing Igor I have convinced everybody else also; if not, please ask me — it is a very immediate computation. [To a further question:] the determinants are of size k, and on each row I am varying the variable which serves as a parameter while subtracting in the other variable. Good, so we are in good shape: we have obtained the Vandermonde directly — here it is — and the Vandermonde gives the desired estimate. I assumed that the length of I is less than one; if the length of I is bigger than one, then one just needs to subdivide it into pieces and observe that there are negative correlations.

I should say — and please, the experts in the audience, correct me — that I do not know which source to quote for this immediate computation. It is an easy computation, but I have never seen it done; please correct me if I am missing some obvious source. What is more serious for me, I do not know the constant α. This argument is very naive: in particular, please observe that here I get only the first power of the Vandermonde, whereas one should expect a square. So this estimate is very crude, but at the same time, please allow me to ask the question: what is α? I have absolutely no idea.
So, question: what is α? [From the audience: there are constants, and the 1/k! …] Yes, but these you can incorporate into α: k! is of order k^k, which grows more slowly than e^{αk²}, so it just gets absorbed. And since the probability is less than one anyway, by choosing α sufficiently small I can erase the constant. So the point is: what is α? This I do not know. How does α depend on I? By the way, let me point out that in this argument one sees that, if the estimates on the kernel are uniform, α depends only on the length of I — I only use the length of I here — but how, I do not know. There is another question: the argument given here applies directly to one real variable and to one complex variable, but it seems to me that it should also be possible to apply it to general processes in several dimensions, by interpreting divided differences in some good way; well, I have not done this.

OK, so much for this digression about the probability of high accumulation, and we proceed with the Palm measures. I erase everything and start a new chapter — and it is a beautiful chapter, a beautiful theory, with a beautiful history which I like very much to tell. As I think I have mentioned many times, around the turn of the twentieth century the phone companies had a significant problem.
The problem was that the waiting lines were growing out of control: phone communication was experiencing a rapid boom, and people had to wait in line for a very long time to place a call. In fact, if you read the literature of the beginning of the 20th century, the difficulty of placing a call is prominently mentioned; maybe the analogous difficulty of the 21st century is connecting to Wi-Fi, or something like this. In any event, the phone companies were trying to resolve this situation, and this led a young engineer in Stockholm working for Ericsson, Conny Palm, to come up with the concept of the Palm distribution — precisely the name of this subsection, Palm theory. One should say that Palm's manuscript was written not even at a physical or engineering level of rigour; the mathematical Palm theory was constructed by Alexander Khinchin, and sometimes one also says Palm–Khinchin theory. In any event, the idea of Palm was the following. Palm was, of course, considering the incoming sequence of calls as a point process, and his idea was to consider the probability distribution of the next call arriving, given that a call has just been placed — in other words, the distribution of the next particle, given that there is a particle at this position. Precisely this consideration immediately resolves the well-known waiting time paradox: if buses depart at random times, forming a Poisson process with mean gap 10 minutes, and you leave the ICTP every day at three o'clock…
…then your average waiting time will be 10 minutes, and the average time since the bus that just left will also be 10 minutes. This seems a paradox, because the average time between buses is also 10 minutes — it looks like 10 = 10 + 10. But the resolution of the paradox lies precisely in the fact that the expectations are taken in two different ways: one with respect to the stationary measure, and one with respect to the Palm measure. Informally, the explanation is of course that it is more probable to hit a bigger interval between buses. However that may be, this is the Palm distribution: I fix a particle at some position and look at the distribution of the next particle, or of some other event. Naturally, one asks the question: what does it mean to consider the conditional measure of a point process with respect to a particle fixed at a given position? This question is not obvious, for the simple reason that "there is a particle at this position" is not an event with respect to which one can take a conditional expectation: there is no suitable sigma-algebra, because there are many particles occupying many positions, so it is not possible. The statement "conditional expectation with respect to a particle at this position" has absolutely no rigorous meaning a priori; it has to be endowed with meaning. There are two ways. There is the naive way of Khinchin himself, which is to take a small neighbourhood of the position — some interval I_ε of length ε — and condition on the event that there is a particle in it.
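The waiting time paradox described above can be seen in a toy simulation (all numbers below are illustrative): buses form a Poisson process with mean gap 10 minutes, and you arrive at a uniformly random time. Your expected wait comes out near 10 minutes, while the gap you land in has expected length near 20, even though the mean gap is 10 — longer gaps are proportionally more likely to contain your arrival time:

```python
import bisect
import random

# Bus arrival times: Poisson process, i.e. i.i.d. exponential gaps of mean 10.
random.seed(0)
T = 200_000.0
t, times = 0.0, []
while t < T:
    t += random.expovariate(1 / 10)
    times.append(t)

waits, gaps = [], []
for _ in range(20_000):
    u = random.uniform(0, times[-1] - 1)       # a uniformly random arrival time
    i = bisect.bisect_right(times, u)          # index of the first bus after u
    waits.append(times[i] - u)                 # your waiting time
    gaps.append(times[i] - (times[i - 1] if i > 0 else 0.0))  # the gap you hit

mean_wait = sum(waits) / len(waits)
mean_gap = sum(gaps) / len(gaps)
print(f"mean wait ~ {mean_wait:.1f} min, mean containing gap ~ {mean_gap:.1f} min")
```

The two expectations are taken with respect to different measures: the gap containing a "typical time" is size-biased, which is exactly the distinction between the stationary measure and the Palm measure.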
So Khinchin's elementary approach is: condition on the event that there is a particle in this small interval, and then take ε to zero. This approach has many advantages, but it has two important disadvantages: it is very inconvenient to work with, because of this differentiation and limit, and it is a priori not clear why the limit exists. So this is not what we will do. We will use an approach through Campbell measures, due to Olav Kallenberg — Kallenberg's formalism of Campbell measures.

Let me explain the idea of the Campbell measure. The difficulty in conditioning for the Palm measure is that the particles are not numbered. If the particles were numbered, I would be able to say "the first particle is at this position"; since the first particle cannot simultaneously be at this position and at that position, these events would be exclusive, so it would be possible to have a sigma-algebra and a conditional expectation. But when I say "a particle at this position", maybe it is the first particle, maybe the second; these events are not exclusive, and precisely for this reason the naive conditioning is not possible. The Campbell measure is an extension of the point process in which one of the particles is marked. Recall that a configuration is a collection of particles without regard to order: the very idea is that we consider measures not on spaces of sequences but on spaces of configurations, and this is a very important difference.
For a finite sequence, ordered versus unordered makes only the difference of an n! in the formulas; for an infinite configuration the difference is very important — it is not possible to order the particles in the unit disk naturally, for example. So the Campbell measure is a little bit of ordering: the particles are all indistinguishable, but one of them is red.

I proceed to the precise definition. The Campbell measure C is a measure on E × Conf(E), the product of the phase space E with the space of configurations on E. One should think of it as follows (I draw the space of configurations on one axis and E on the other): over each configuration X, I place X itself as a counting measure on E, and the resulting measure is precisely the Campbell measure. To put it precisely: for B a subset of E and Z a subset of the space of configurations,

C(B × Z) = ∫_Z #(X ∩ B) dℙ(X),

the integral over Z of the occupation number of B. The Campbell measure is an infinite measure, but in restriction to any bounded set it is finite, and so I can take conditional measures of the Campbell measure. The conditional measure of C at a point q ∈ E is precisely the Palm measure ℙ at q. Now, this measure, as it is defined, has a particle at q by definition, and it is convenient to remove this particle.
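For a finite phase space the definition of the Campbell measure, and the fact that conditioning it at a point q recovers "the process conditioned on a particle at q", can be verified by direct enumeration. A minimal sketch with an independent Bernoulli point process on three sites (the site probabilities are hypothetical, chosen only for illustration):

```python
import itertools

# Independent Bernoulli point process on E = {0, 1, 2}.
E = [0, 1, 2]
prob = {0: 0.2, 1: 0.5, 2: 0.7}       # P(particle present at site e) -- assumption

def P(X):                             # probability of the configuration X
    out = 1.0
    for e in E:
        out *= prob[e] if e in X else 1 - prob[e]
    return out

configs = [frozenset(s) for r in range(len(E) + 1)
           for s in itertools.combinations(E, r)]

def campbell(B, Z):                   # C(B x Z) = E[ #(X ∩ B) 1_Z(X) ]
    return sum(len(X & frozenset(B)) * P(X) for X in configs if X in Z)

q = 1
all_Z = set(configs)
Z = {X for X in configs if 2 in X}    # an arbitrary test event

# Conditioning the Campbell measure at the point q gives the Palm measure:
# C({q} x Z) / C({q} x everything) = P(X in Z | q in X).
lhs = campbell([q], Z) / campbell([q], all_Z)
rhs = sum(P(X) for X in configs if q in X and X in Z) / prob[q]
assert abs(lhs - rhs) < 1e-12
print(lhs)   # for an independent process, simply P(2 in X)
```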
The Palm measure, by definition, has a particle at q: it is supported on configurations X such that q belongs to X. So it is convenient to remove it. Why is it convenient? I will give a very simple illustration. Already for determinantal measures, I will prove that the Palm measure of a determinantal measure is determinantal. But if I kept this particle at q, I would have to say "determinantal measure plus a particle at q", and all the statements would have the form "something, something, plus a particle at q". So it is convenient to kick out this particle at q. A call, so to speak, takes place now — but it has already taken place, so I forget about it and ask what happens with the phone line afterwards. Removing the particle at q gives the reduced Palm measure, which I denote ℙ^q.

To simplify the exposition even further, let me note a clear proposition about the correlation functions of ℙ^q. An audience participation question — but those who already know, please don't answer: how is the l-th correlation function of ℙ^q at x_1, …, x_l expressed through the correlation functions of the initial point process? You have the l-th correlation function, meaning infinitesimally that there are particles at positions x_1, …, x_l, and also the condition that there is a particle at q. [After some attempts from the audience:] precisely, yes:

ρ^q_l(x_1, …, x_l) = ρ_{l+1}(q, x_1, …, x_l) / ρ_1(q).
Exactly, thank you very much. You do not need to average or anything: these are just the infinitesimal probabilities that there are particles at all these positions and also at q. From this proposition there is a corollary, which is the Shirai–Takahashi theorem: the reduced Palm measure of a determinantal point process is again determinantal, and the kernel is

K^q(x, y) = K(x, y) − K(x, q) K(q, y) / K(q, q).

[Correcting the board, after a remark from the audience:] you are absolutely right — the rank-one term must be divided by ρ_1(q) = K(q, q); this was a mistake, thank you very much. So the Palm measure ℙ^q is the determinantal point process with kernel K^q. By Palm measure I shall always mean the reduced Palm measure; let me write it just once, and I will not write it again.

Now we are ready to see, at least, how one proves the Gibbs property for determinantal point processes. Recall that we have considered processes satisfying the weak division axiom: K is the projection onto a subspace L such that, if f is in L and f(q) = 0, then f(x)/(x − q) is in L. Why do I care so much about functions vanishing at q? Because if K projects onto L, then K^q projects onto L^q, precisely the subspace of functions in L assuming the value zero at q.
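In the discrete setting, the Shirai–Takahashi formula can be checked exactly: the minors of the Palm kernel K^q reproduce ρ_{l+1}(q, ·)/ρ_1(q), which is a Schur-complement identity for determinants. A minimal sketch with an arbitrary rank-3 projection kernel on six points (the kernel and the seed are illustrative choices):

```python
from itertools import combinations

import numpy as np

# A rank-3 projection kernel K on a 6-point space: K = Q Q^T with orthonormal
# columns Q, so that rho_l(S) = det(K_S) defines a determinantal process.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))
K = Q @ Q.T

q = 2
# Reduced Palm kernel at q, as in the Shirai-Takahashi theorem:
Kq = K - np.outer(K[:, q], K[q, :]) / K[q, q]

pts = [i for i in range(6) if i != q]
for l in (1, 2, 3):
    for S in combinations(pts, l):
        lhs = np.linalg.det(Kq[np.ix_(S, S)])            # Palm correlation
        T = S + (q,)
        rhs = np.linalg.det(K[np.ix_(T, T)]) / K[q, q]   # rho_{l+1}(q,S)/rho_1(q)
        assert abs(lhs - rhs) < 1e-9
print("Palm kernel minors match rho_{l+1}(q, .) / rho_1(q)")
```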
Indeed, K(·, q) is precisely the reproducing kernel at q: pairing with it is the functional of taking the value at q. So from the projection operator K we subtract the rank-one projection onto the direction of the reproducing kernel, and we project onto the smaller subspace of functions in L which assume the value zero at q. Now, if one looks at this formula attentively — this is the weak division axiom — one sees that the axiom implies

L^q = ((x − q)/(x − p)) · L^p.

If you think about it, this is very natural: think again about spaces of polynomials. What does it mean for a polynomial to assume the value zero at q? It means that the polynomial is divisible by (x − q); and to vanish at p, that it is divisible by (x − p). So precisely, L^q is obtained from L^p by multiplying by (x − q)/(x − p); equivalently, L^q/(x − q) = L^p/(x − p), which is a direct corollary of the axiom. But this implies in turn, by the proposition that we proved last time — if the ranges differ by multiplication by a function, then the corresponding determinantal point processes differ by multiplication by a multiplicative functional — that the Palm measures at different points are equivalent, and in fact differ by multiplication by a multiplicative functional. And once I have the equivalence of Palm measures, the corollary is precisely the Gibbs property which I formulated in the first class.
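The claim that K^q projects onto the functions in L vanishing at q can also be seen numerically. A small side-check (the toy rank-3 projection kernel on six points below is an arbitrary illustrative choice): subtracting the normalized rank-one term K(·,q)K(q,·)/K(q,q) from a projection K yields again a projection, whose range sits inside ran(K) and whose functions all vanish at q.

```python
import numpy as np

# Arbitrary rank-3 projection kernel on a 6-point space.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))
K = Q @ Q.T

q = 2
Kq = K - np.outer(K[:, q], K[q, :]) / K[q, q]

assert np.allclose(Kq @ Kq, Kq)     # K^q is again a projection ...
assert np.allclose(Kq[:, q], 0)     # ... onto functions vanishing at q,
assert np.allclose(K @ Kq, Kq)      # ... inside the range of K.
print("K^q projects onto {f in ran K : f(q) = 0}")
```

The first assertion works because K(q, q) equals the squared norm of the reproducing kernel K(·, q) when K is a projection, so the subtracted term is exactly the rank-one projection onto its direction.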
The corollary of this, precisely, is the following: if I fix the particles in the exterior of an interval and consider the particles inside to be free, the conditional measure inside is precisely an orthogonal polynomial ensemble. There is, of course, a normalization constant; there is a special weight function, which does not depend on the exterior configuration but only on the point process itself; there is the product of interaction factors between the particles inside and the fixed particles outside, which has to be understood as a principal value; and there is, of course, the Vandermonde determinant, which is what makes this an orthogonal polynomial ensemble — just the same formula for the conditional measure as I wrote at the very beginning of today's class. The proof is essentially direct measure theory — Fubini's theorem, if you wish. So the Gibbs property is a direct corollary of the equivalence of the Palm measures by multiplicative functionals, which in turn is a direct corollary of the comparability of the Palm subspaces by multiplication by a function, which in turn is a direct corollary of the weak division axiom. Thank you very much.