Okay, yes. So let us just briefly recall our setting; there will be essentially no calculation today. Let us start from a simple case: a continuous function K(x, y) of two variables on C. Here mu is a measure on C which we call the reference measure. This function induces an operator from L²(C, mu) to itself, which we again denote by K, given by the integral

(Kf)(x) = ∫_C K(x, y) f(y) dmu(y).

By the Macchi–Soshnikov theorem, this operator — we call K the kernel — induces a probability measure on the space of configurations on C, that is, of subsets of C without accumulation points. So here is our C, and we denote a configuration by X: it is just a collection of particles in C, possibly infinitely many. We denote this probability measure by P_K. Yesterday we wrote two formulas that characterize P_K; in effect, P_K gives us a way of choosing a random subset of C which is locally finite.

We have also already seen that it is important to introduce the Palm measure of P_K. What is this? I will give only the intuitive definition. Take distinct points p_1, …, p_l in C, and let X be a random subset of C with distribution P_K. The Palm measure at p_1, …, p_l is the conditional distribution of X \ {p_1, …, p_l} — we erase these particles — given that the following event happens: {p_1, …, p_l} ⊂ X. Of course, this is an event of probability zero, so it is not straightforward to make this precise, but intuitively this is the meaning. And this is exactly what we calculated yesterday: for an orthogonal polynomial ensemble we fixed the last particles p_1, …, p_l and considered the conditional distribution. We denote this conditional measure by P_K^{p_1,…,p_l}, with the points written as a superscript; for short, P_K^p with p = (p_1, …, p_l).
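For reference, the characterization from yesterday and the intuitive definition of the Palm measure can be written as follows; this is a standard formulation, and the small-ball conditioning is the usual way of making the zero-probability event rigorous.

```latex
% Correlation functions of the determinantal point process P_K:
\rho_l(x_1,\dots,x_l) \;=\; \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{l},
\qquad l \ge 1.
% Palm measure at distinct points p_1,\dots,p_l (heuristically):
\mathbb{P}_K^{p_1,\dots,p_l}(A)
\;=\; \lim_{\varepsilon \to 0}
\mathbb{P}_K\Bigl( X \setminus \{p_1,\dots,p_l\} \in A \Bigm|
X \cap B(p_i,\varepsilon) \neq \emptyset,\ i=1,\dots,l \Bigr).
```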
And we are in fact interested in computing the Radon–Nikodym derivative of such measures: actually of P^{p_1,…,p_l} with respect to P^{q_1,…,q_l}, where q = (q_1, …, q_l) has the same length. Later we will also say something about the case where p and q have different lengths. Okay. In fact, yesterday we also saw that for an orthogonal polynomial ensemble this conditional measure is itself given by another orthogonal polynomial ensemble, which again induces a determinantal point process. And this fact is general: there is a theorem of Shirai and Takahashi which says the following.

First, the conditional measure P_K^{p} is again a determinantal point process, with a kernel which is explicitly given. To state the kernel, let us proceed in stages. When p has length one, it is very easy to write:

K^{p_1}(x, y) = K(x, y) − K(x, p_1) K(p_1, y) / K(p_1, p_1).

In general we can iterate this: K^{p} is obtained by first forming K^{p_1} — this is a new kernel — then applying the same procedure at p_2, and so on. And there is actually an explicit closed formula for the result, a ratio of determinants. With the convention that the index 0 corresponds to x in the rows and to y in the columns,

K^{p_1,…,p_l}(x, y) = det

| K(x, y)     K(x, p_1)     …   K(x, p_l)   |
| K(p_1, y)   K(p_1, p_1)   …   K(p_1, p_l) |
|    ⋮            ⋮                ⋮        |
| K(p_l, y)   K(p_l, p_1)   …   K(p_l, p_l) |

divided by the determinant of the submatrix, det[K(p_i, p_j)] for i, j from 1 to l.

In other words, we can write it in a compact way: the Palm measure is just the determinantal point process given by this kernel,

(P_K)^{p_1,…,p_l} = P_{K^{p_1,…,p_l}}.

It looks very simple — we just moved the points from one side to the other — but the meaning is that on the left we have the Palm measure, and on the right the determinantal measure given by this explicit kernel.
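A quick numerical sanity check of the two forms of the Palm kernel. The toy Gaussian kernel on the real line below is my own hypothetical choice, used only to exercise the algebra: iterating the one-point formula must agree with the determinant ratio, since both compute the same Schur complement.

```python
import numpy as np

# A toy symmetric kernel on real points, chosen only to check the algebra
# of the two Palm-kernel formulas (not a kernel from the lecture).
def K(x, y):
    return np.exp(-(x - y) ** 2)

def palm_one(K, p):
    """One-point Palm kernel: K^p(x,y) = K(x,y) - K(x,p)K(p,y)/K(p,p)."""
    def Kp(x, y):
        return K(x, y) - K(x, p) * K(p, y) / K(p, p)
    return Kp

def palm_det(K, ps, x, y):
    """Determinant-ratio formula: index 0 is x in the rows, y in the columns."""
    l = len(ps)
    rows = [x] + list(ps)
    cols = [y] + list(ps)
    M = np.array([[K(a, b) for b in cols] for a in rows])
    D = np.array([[K(a, b) for b in ps] for a in ps])
    return np.linalg.det(M) / np.linalg.det(D)

ps = [0.3, 1.1]
Kp = palm_one(palm_one(K, ps[0]), ps[1])   # iterate the one-point formula
x, y = -0.5, 0.7
print(abs(Kp(x, y) - palm_det(K, ps, x, y)))  # agreement up to rounding
```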
So, there is an important fact — an exercise. Assume that K is the kernel of an orthogonal projection from L²(C, mu) onto some subspace L. Then K^{p}, for a single point p, is the kernel of the orthogonal projection onto the subspace of functions f in L that vanish at p. There is in fact a subtlety here, because usually, for a function in L², evaluation at a single point makes no sense. But since our kernels are continuous — nice kernels — there is, at least for our purposes, no subtlety.

Okay, now let me state our theorem and give an intuitive explanation of how we obtain such a result. Our setting is the following. We fix a function phi : C → R, say smooth, such that the Laplacian of phi is bounded from below and from above. I think this is probably not essential, but let us focus on this case. Consider the measure dlambda_phi(z) = e^{−phi(z)} dlambda(z) on C, where lambda is the Lebesgue measure on C. We then define the generalized weighted Fock space F_phi, determined by lambda_phi, as the space of holomorphic functions on the whole plane that are square integrable with respect to this measure. It turns out this is a closed subspace of L²(C, dlambda_phi). The kernel of the orthogonal projection onto this Fock space is called the generalized Bergman kernel, which we denote by Pi_phi. It is the reproducing kernel of the space: for any f which is holomorphic and square integrable with respect to the measure,

f(z) = ∫_C Pi_phi(z, w) f(w) dlambda_phi(w).

The simplest example — and probably the most important one — is the function phi(z) = |z|², for which dlambda_phi is the Gaussian measure. In this case, if I do not make a mistake (probably there is a one-half somewhere), the kernel is given explicitly by the very simple formula

Pi_phi(z, w) = e^{z w̄} / π.

Any questions? So, we are interested in the determinantal point process generated by such a kernel.
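For the Gaussian example, the stated formula can be checked numerically, assuming the convention dlambda_phi(z) = e^{−|z|²} dlambda(z): with that weight the monomials are orthogonal with ‖z^n‖² = π n!, and summing the orthonormal basis reproduces the kernel e^{z w̄}/π. This is only a sanity check of that series identity, not part of the lecture.

```python
import numpy as np
from math import factorial, pi

# With weight exp(-|z|^2) dA(z), the monomials z^n are orthogonal and
# ||z^n||^2 = pi * n!.  Summing e_n(z) = z^n / sqrt(pi n!) over n gives
# the Bergman (reproducing) kernel Pi(z, w) = exp(z * conj(w)) / pi.
def kernel_series(z, w, N=40):
    return sum((z * np.conj(w)) ** n / (pi * factorial(n)) for n in range(N))

z, w = 0.8 + 0.3j, -0.2 + 0.5j
exact = np.exp(z * np.conj(w)) / pi
approx = kernel_series(z, w)
print(abs(approx - exact))  # tiny truncation error
```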
And in particular, for this important example, the determinantal point process P_{Pi_phi} — a probability measure on the space of configurations of C — has a name: it is the Ginibre point process, the limit of the eigenvalue processes of Gaussian non-Hermitian random matrices. So we will call the general ones generalized Ginibre point processes. And remember that we would like to compute — to understand — the Radon–Nikodym derivatives between such Palm measures, and we already know that the Palm kernel is just the orthogonal projection onto some subspace.

So now we are interested in the following picture. Our original Fock space F_phi gives rise to the original point process P_{Pi_phi}. If we consider the subspace F_phi(p) of those functions that vanish at the points p_1, …, p_l, we have the orthogonal projection onto this subspace, which we denote by Pi_phi^{p}, and this orthogonal projection gives us a point process; by Shirai–Takahashi, this measure is exactly the Palm measure we are interested in — the one which will appear, for instance, in the calculation of cocycles of diffeomorphisms acting on such spaces. We do the same thing with q: we obtain a kernel that depends on q and the corresponding point process, which again, by Shirai–Takahashi, is the Palm measure at q. So now our purpose is to understand the Radon–Nikodym derivative between these two point processes, and it is very natural to study first the relation between the two subspaces.
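Schematically, the same three-step passage — subspace, projection, point process — is applied to each tuple of points:

```latex
F_\varphi(p) := \{\, f \in F_\varphi : f(p_1) = \dots = f(p_l) = 0 \,\}
\;\rightsquigarrow\; \Pi_\varphi^{\,p}
\;\rightsquigarrow\;
\mathbb{P}_{\Pi_\varphi^{\,p}}
= \bigl(\mathbb{P}_{\Pi_\varphi}\bigr)^{p_1,\dots,p_l}
\quad \text{(Shirai--Takahashi)},
```

and similarly for q; the object of study is the Radon–Nikodym derivative between the two resulting point processes.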
So, in fact, if we completely understand the relation between these two subspaces, this should give us information about the two point processes. And the reason we study this kind of point process is that there is a very simple relation between these two subspaces. This is a very simple proposition, which can also be a good exercise. It says that

F_phi(p) = g_{p,q} · F_phi(q),   where   g_{p,q}(z) = (z − p_1)⋯(z − p_l) / ((z − q_1)⋯(z − q_l)).

Here a function multiplied by a space means that I multiply each function inside the space by this function and put the results together as a new subspace:

g_{p,q} · F_phi(q) = { g_{p,q} f : f ∈ F_phi(q) }.

Of course, this is very simple, because a holomorphic function that vanishes at q_1, …, q_l can be divided by the polynomial (z − q_1)⋯(z − q_l), and the resulting product visibly vanishes at the points p_1, …, p_l. The only thing we need to check is that such a product g_{p,q} f is indeed in L² with respect to the measure.

Okay, then it turns out that this function g_{p,q} is very important: it appears in our calculation. So, we can start with the simplest case, where phi is radial. The theorem comes in two steps, and the version that I will present here was obtained one or two weeks ago. Okay, the first case is when phi is radial, which means that phi(z) = phi(|z|); in particular, the Gaussian case is of this type.
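A tiny numerical illustration of the proposition, with toy points and a toy entire function (all hypothetical choices of mine): multiplying a function vanishing at q_1, q_2 by g_{p,q} cancels the q-zeros against the poles and produces a function vanishing at p_1, p_2.

```python
import numpy as np

# Toy illustration of F(p) = g_{p,q} . F(q): if f vanishes at q_1, ..., q_l,
# then g_{p,q} f vanishes at p_1, ..., p_l, and the apparent poles of g_{p,q}
# at the q_i are cancelled by the zeros of f.
p = [1.0 + 0.0j, 0.0 + 0.5j]
q = [-0.3 + 0.2j, 2.0 + 1.0j]

def g_pq(z):
    # g_{p,q}(z) = prod_i (z - p_i) / (z - q_i)
    return np.prod([(z - pi) / (z - qi) for pi, qi in zip(p, q)])

def f(z):
    # an entire function vanishing exactly at q_1 and q_2
    return (z - q[0]) * (z - q[1]) * np.exp(0.1 * z)

def gf(z):
    # g_{p,q} * f with the q-factors cancelled: manifestly entire
    return (z - p[0]) * (z - p[1]) * np.exp(0.1 * z)

z0 = 0.7 + 0.4j   # a generic point away from all zeros and poles
print(abs(g_pq(z0) * f(z0) - gf(z0)), abs(gf(p[0])), abs(gf(p[1])))
```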
So, then the first statement is the following. Write p = (p_1, …, p_l) and q = (q_1, …, q_k). The limit

Ψ_{p,q}(X) = lim_{R→∞}  ∏_{x ∈ X, |x| ≤ R}  ∏_{i=1}^{l} |x − p_i|² / |x − q_i|²,

the product being over the particles of our configuration X (a set without accumulation points) inside the ball of radius R, exists in L¹ with respect to the Palm measure; this is in the case k = l. Actually, this is the most difficult part: to show that this product, which turns out to be an infinite product, actually converges.

The second statement is that we can compute the expectation of this limit explicitly. Always under this assumption,

E[Ψ_{p,q}] = ( ∏_{i<j≤l} |q_i − q_j|² / ∏_{i<j≤l} |p_i − p_j|² ) · ( det[Pi_phi(p_i, p_j)]_{i,j=1}^{l} / det[Pi_phi(q_i, q_j)]_{i,j=1}^{l} ).

In particular — I have not said it yet — this means the Radon–Nikodym derivative can be calculated explicitly; it is given by these very explicit quantities:

dP^{q}/dP^{p} (X) = ( det[Pi_phi(q_i, q_j)] / det[Pi_phi(p_i, p_j)] ) · ( ∏_{i<j} |p_i − p_j|² / ∏_{i<j} |q_i − q_j|² ) · Ψ_{p,q}(X).

The third statement is that when l and k are different, we cannot compute the Radon–Nikodym derivative at all: the two measures are mutually singular — they are supported on disjoint subsets.

And I would like to say that when phi is just the Gaussian weight, the same — a similar — result was already obtained, with a very different proof: this case is due to Osada and Shirai, in 2015. So okay, this is the radial case. How much time do I have — ten minutes? Okay. So probably I will not have time to explain how we obtain such a result, but still let me state
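One way to remember the constants in this theorem: the prefactor in the Radon–Nikodym formula is exactly the reciprocal of the expectation of Ψ_{p,q}, so the derivative is a normalized multiplicative functional and automatically integrates to 1 against the Palm measure at p, as it must:

```latex
\frac{d\,\mathbb{P}^{\,q}}{d\,\mathbb{P}^{\,p}}
\;=\; \frac{\Psi_{p,q}}
           {\mathbb{E}_{\mathbb{P}^{\,p}}\bigl[\Psi_{p,q}\bigr]},
\qquad\text{hence}\qquad
\mathbb{E}_{\mathbb{P}^{\,p}}
\left[\frac{d\,\mathbb{P}^{\,q}}{d\,\mathbb{P}^{\,p}}\right] \;=\; 1 .
```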
the result in the general case, because it seems there is something interesting there. So, in general — for a general phi — first, the following is true: the limit again exists, and we denote it again by Ψ_{p,q}; the radial case (a) will be a particular case of this. The limit is now of the form

Ψ_{p,q}(X) = lim_{R→∞}  exp( ∫_C Σ_{i=1}^{l} k_{p_i, q_i}(z) dlambda_phi(z) )  ·  ∏_{x ∈ X, |x| ≤ R} ∏_{i=1}^{l} |x − p_i|² / |x − q_i|²,

where the exponential is a correction term, which in the radial case is just 1, and the rest is the same product as before. Here k_{p_1}(z), for example — actually this should be some k_{p,q}, which is explicit — is given by twice the real part of an explicit function; we want to erase the singularity, and the explicit correction involves terms like p/z + p²/(2z²). And of course, if dlambda_phi is radial, this integral is indeed 0, so we can forget it; everything else from case (a) holds — except that we have not yet obtained the explicit computation of the expectation. Conjecturally, the expectation formula is the same; in our proof there is one step whose difficulty we have not yet overcome.

Mm-hmm. So I have five minutes; let me briefly — okay, I forgot one thing here. In the case of different lengths the measures are singular, and in particular — and this is again true in the general case — these point processes are rigid in the sense of Ghosh and Peres. Rigidity means that if we know the configuration outside a compact set, then we almost surely know the number of particles inside this compact set. And no, the singularity is not a formal consequence of rigidity — rather, rigidity is the way we prove that the measures are singular. Actually, an important thing that we need for all these computations is an estimate of the kernel — really a result from complex analysis, Christ's estimate — which says that the weighted kernel
is controlled as follows: on the diagonal it is bounded by some fixed constant, and off the diagonal it goes to zero exponentially fast in the distance |z − w|. This is the one main ingredient.

Okay, I still have a few minutes, so let me briefly give the intuition. Forget all the limit issues; we just want to "prove" that the Radon–Nikodym derivative is a constant times such a product. And remember that g_{p,q} is exactly the function relating the two subspaces, hence the two kernels. In general we have the following theorem, which is very technical; it is from 2015. Suppose we have a subspace L; this gives us a projection Pi, and this gives us a point process — and it can be a point process on some abstract phase space, not just on C. Suppose we also have a function g; for simplicity, let us take g positive, though this is not essential. Then gL is a new subspace, we get a new projection, which I denote by Pi^g, and we have a new point process. Under technical assumptions on such a pair (Pi, g), roughly speaking, the Radon–Nikodym derivative can be computed as a normalized multiplicative functional:

dP_{Pi^g}/dP_{Pi} (X) = (1/Z_g) ∏_{x ∈ X} g(x)²,

with a normalization constant Z_g. But actually — and this is only roughly speaking — this product usually, in general, never converges. So we indeed need, as here, some kind of principal value: we first take the product over some ball, and then we let the ball grow bigger and bigger. Okay, let me stop here.
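In the Gaussian case, the off-diagonal decay in Christ-type estimates can be seen in closed form. The snippet below is a numerical illustration only, assuming the Ginibre convention Pi(z, w) = e^{z w̄}/π with weight e^{−|z|²}: the weighted kernel is then exactly a Gaussian in |z − w|.

```python
import numpy as np

# For phi(z) = |z|^2 and Pi(z, w) = exp(z * conj(w)) / pi, a direct
# computation gives the exact identity
#   |Pi(z, w)| * exp(-(|z|^2 + |w|^2) / 2) = exp(-|z - w|^2 / 2) / pi,
# i.e. the weighted kernel decays like a Gaussian off the diagonal.
rng = np.random.default_rng(0)
for _ in range(100):
    z = complex(*rng.normal(size=2))
    w = complex(*rng.normal(size=2))
    lhs = abs(np.exp(z * np.conj(w)) / np.pi) * np.exp(-(abs(z)**2 + abs(w)**2) / 2)
    rhs = np.exp(-abs(z - w)**2 / 2) / np.pi
    assert np.isclose(lhs, rhs)
print("weighted kernel decays like exp(-|z - w|^2 / 2): checked")
```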