So this first lecture will be very elementary, and I apologize to those who already know a lot about determinantal point processes. Let us recall the definition of a determinantal point process. We are going to restrict ourselves to good kernels: we only consider continuous ones. First recall the configuration space over the complex plane, $\mathrm{Conf}(\mathbb{C})$: this is just the collection of locally finite subsets of $\mathbb{C}$. Now suppose we fix a measure $\mu$; it can be the Lebesgue measure, a Gaussian measure, or some generalization — it is just a fixed reference measure. We consider a continuous function of two variables, $K(z,w)$, which induces the kernel of an operator sending a function $f$ to the function we denote $Kf$: $$ (Kf)(z) = \int_{\mathbb{C}} K(z,w)\, f(w)\, d\mu(w). $$ Think of $K$ as a matrix: we replace the sum by an integral, so this operator is just the usual continuous version of a matrix acting on a vector. We suppose this operator is positive — in the matrix case this means the matrix itself is positive, not that its coefficients are positive — that is, $\langle Kf, f\rangle \ge 0$ in the $L^2$ scalar product; and contractive, meaning $\|Kf\|_{L^2} \le \|f\|_{L^2}$. Under these assumptions, by the Macchi–Soshnikov theorem (in the form of Shirai–Takahashi), $K$ determines a unique probability measure on the space of configurations, which we denote $\mathbb{P}_K$, with the following property. We denote a configuration by $X$: it is just a locally finite subset of $\mathbb{C}$. For every nice test function $\varphi$, say continuous and compactly supported, $$ \mathbb{E}\Big[ \sum_{\substack{y_1,\dots,y_m \in X \\ \text{distinct}}} \varphi(y_1,\dots,y_m) \Big] = \int \varphi(y_1,\dots,y_m)\, \det\big[K(y_i,y_j)\big]_{i,j=1}^{m}\, d\mu(y_1)\cdots d\mu(y_m), $$ where $[K(y_i,y_j)]$ is an $m \times m$ matrix.
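As a sanity check on this definition, here is a minimal discrete sketch (not from the lecture; the 4-point ground set and the rank-2 projection are my own toy choices). On a finite set, "positive and contractive" means $0 \le K \le I$ as a matrix, the inclusion probabilities are $\mathbb{P}(S \subseteq X) = \det K_S$, and the atoms recovered by inclusion–exclusion form a genuine probability measure; for a projection kernel the process has exactly $\operatorname{rank}(K)$ points almost surely.

```python
import numpy as np
from itertools import combinations

# Hypothetical 4-point ground set; K = orthogonal projection onto a
# 2-dimensional subspace, so K is positive and contractive (0 <= K <= I).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))
K = Q @ Q.T                                   # projection kernel, rank 2

eigs = np.linalg.eigvalsh(K)
assert eigs.min() > -1e-12 and eigs.max() < 1 + 1e-12   # positive, contractive

def incl(S):
    """Inclusion probability P(S subset of X) = det(K_S)."""
    S = list(S)
    return np.linalg.det(K[np.ix_(S, S)]) if S else 1.0

def atom(S):
    """Atom P(X = S), by Moebius inversion over supersets of S."""
    rest = [i for i in range(4) if i not in S]
    return sum((-1) ** k * incl(tuple(S) + extra)
               for k in range(len(rest) + 1)
               for extra in combinations(rest, k))

atoms = {S: atom(S) for r in range(5) for S in combinations(range(4), r)}
assert abs(sum(atoms.values()) - 1.0) < 1e-10            # a probability measure
assert all(p > -1e-10 for p in atoms.values())
# A rank-2 projection DPP has exactly 2 points almost surely:
assert all(abs(p) < 1e-8 for S, p in atoms.items() if len(S) != 2)
```

The last assertion illustrates the general fact that a projection kernel of rank $r$ yields a process with exactly $r$ particles, which is the mechanism behind the $n$-particle ensembles below.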
Actually, later we will use another, equivalent characterization. The left-hand side above is the expectation of a random sum. In some situations it is more convenient to compute the expectation of a certain random product, which in this case is called the Laplace transform: take a function $g$ such that $g - 1$ is compactly supported, and multiply all the values of $g$ over the configuration. This random product satisfies $$ \mathbb{E}\Big[ \prod_{x \in X} g(x) \Big] = \det\big( 1 + (g-1)K \big). $$ Here, of course, this is a Fredholm determinant. We will not go deep into this; it is just a continuous generalization of the matrix determinant, and if the space is finite it is indeed the usual determinant. We will probably use this tomorrow. Of course there are a lot of motivations for studying determinantal point processes, but that is not our purpose. Today I will only focus on the computation in a very simple finite-dimensional case, which will give us some idea of why we should consider such-and-such concepts and quantities. So let us now focus on the following example, which is the very first example of a determinantal point process: the orthogonal polynomial ensemble. It is the following. Suppose we are given a positive continuous weight $\rho$ such that $\int_{\mathbb{C}} |z|^k \rho(z)\, d\lambda(z) < \infty$ for every $k$. Consider the probability measure on $\mathbb{C}^n$ given by $$ \mathbb{P}^{\rho,n}(dz) = \frac{1}{Z_n} \prod_{1 \le i < j \le n} |z_i - z_j|^2 \prod_{i=1}^{n} \rho(z_i)\, d\lambda(z_i), $$ where $Z_n$ is a normalization constant. The density is in fact proportional to the squared modulus of a Vandermonde determinant $\det[z_i^{j-1}]$ — that is how the determinant finally enters the scene. The moment conditions on $\rho$ are just there to make this a finite measure, so that we can normalize it to a probability measure.
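The Laplace-transform identity can be checked directly in the finite case, where the Fredholm determinant is an ordinary one. Below is a small sketch (my own toy setup, not from the lecture): a hypothetical 4-point $L$-ensemble with $\mathbb{P}(X = S) \propto \det L_S$, whose correlation kernel is $K = L(I+L)^{-1}$; the brute-force expectation of $\prod_{x \in X} g(x)$ is compared with $\det(I + (g-1)K)$.

```python
import numpy as np
from itertools import combinations

# Hypothetical 4-point L-ensemble: P(X = S) proportional to det(L_S).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
L = B @ B.T
Z = np.linalg.det(np.eye(4) + L)          # normalization: sum_S det(L_S)
K = L @ np.linalg.inv(np.eye(4) + L)      # correlation kernel of the ensemble

g = np.array([0.5, 2.0, 1.3, 0.1])        # arbitrary multiplicative statistic

def detL(S):
    S = list(S)
    return np.linalg.det(L[np.ix_(S, S)]) if S else 1.0

# Left side: E prod_{x in X} g(x), by enumerating all subsets.
lhs = sum(detL(S) * np.prod(g[list(S)])
          for r in range(5) for S in combinations(range(4), r)) / Z

# Right side: the (here ordinary) determinant det(1 + (g - 1) K).
rhs = np.linalg.det(np.eye(4) + np.diag(g - 1.0) @ K)
assert np.isclose(lhs, rhs)
```

The check works because $\det L_S \prod_{i \in S} g_i = \det[(GL)_S]$ with $G = \operatorname{diag}(g)$, so the left side sums to $\det(I + GL)/\det(I+L)$, which algebraically equals $\det(I + (G-I)K)$.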
And there is a theorem, or proposition, which we are not going to prove; it requires some effort, but only elementary facts about determinants. The fact is this. Denote the measure above by $\mathbb{P}^{\rho,n}$ — it has two parameters, the weight $\rho$ and the number of particles $n$. Now forget the order: $\mathbb{C}^n$ is the set of ordered $n$-tuples, and if we have a tuple we forget the order, so it becomes a set; this is harmless because the points are almost surely distinct. Then the push-forward of $\mathbb{P}^{\rho,n}$ under this forgetting map is a determinantal point process, with kernel explicitly given by $$ K_n(z,w) = \sum_{k=0}^{n-1} p_k(z)\, \overline{p_k(w)}. $$ There is a famous name for this. The $p_k$ are the orthonormal polynomials of the weight $\rho$: we take $1, z, z^2, \dots$, all of which lie in the space $L^2(\mathbb{C}, \rho\, d\lambda)$, and we do Gram–Schmidt; then we obtain these orthonormal polynomials. (Yes, okay, $\lambda$ is the Lebesgue measure; and yes, $n$ is given — we only consider fixed $n$.) We admit this proposition, and if people are interested, I invite them to read the papers or books; this is really the very first example. Okay, now we should look at this kernel. What does it mean? This kernel is in fact exactly the kernel of the orthogonal projection from $L^2(\mathbb{C}, \rho\, d\lambda)$ onto the linear subspace spanned by $1, z, \dots, z^{n-1}$. Indeed, one checks that $K_n p_k = p_k$ for $k \le n-1$ and $K_n p_k = 0$ for $k \ge n$. Okay, so now we are going to study something which, at first glance, it is not completely clear why we should care about. But let me first define it; then we compute something, and we will understand why we are actually interested in these things.
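The Gram–Schmidt construction and the projection property of the Christoffel–Darboux kernel can be illustrated numerically. The sketch below (my own illustration; the real line with Gaussian weight stands in for the complex weight of the lecture) builds the orthonormal polynomials by Gram–Schmidt, using Gauss–Hermite quadrature so that the inner products of polynomials are exact, and then verifies $K_n p_k = p_k$ for $k < n$ and $K_n p_n = 0$.

```python
import numpy as np

# Sketch on the real line with Gaussian weight rho(x) = exp(-x^2):
# 20-node Gauss-Hermite quadrature is exact for polynomials up to degree 39.
nodes, weights = np.polynomial.hermite.hermgauss(20)

n = 4
V = np.vander(nodes, n + 1, increasing=True)   # monomials 1, x, ..., x^n

def inner(f, g):                               # <f, g> = int f g rho dx
    return np.sum(weights * f * g)

# Gram-Schmidt -> orthonormal polynomials p_0, ..., p_n (values at the nodes).
P = []
for k in range(n + 1):
    v = V[:, k].copy()
    for p in P:
        v -= inner(v, p) * p
    P.append(v / np.sqrt(inner(v, v)))

# Christoffel-Darboux kernel K_n(x, y) = sum_{k < n} p_k(x) p_k(y).
Kn = sum(np.outer(P[k], P[k]) for k in range(n))

# K_n is the orthogonal projection onto span{1, x, ..., x^{n-1}}:
apply = lambda f: Kn @ (weights * f)           # (K_n f)(x_i) by quadrature
for k in range(n):
    assert np.allclose(apply(P[k]), P[k])      # K_n p_k = p_k for k < n
assert np.allclose(apply(P[n]), 0)             # K_n p_n = 0
```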
Okay, so I will study the following thing. Actually, I will not define it abstractly; I will just look at this measure and consider conditional measures. For example, condition on the last coordinate being a fixed point $p$: the conditional measure of $\mathbb{P}^{\rho,n}$ on the set $\{z_n = p\}$. Of course this set has measure zero, but we can still consider the conditional measure on it: in the formula we just substitute the fixed point $p$ for $z_n$, and we change the constant so that it becomes a probability. Actually, I want to be more general and fix $l$ points: set $z_{n-l+1} = p_1, \dots, z_n = p_l$. Then the conditional measure on $\mathbb{C}^{n-l}$ is $$ \frac{1}{Z_n(p_1,\dots,p_l)} \prod_{1 \le i < j \le n-l} |z_i - z_j|^2 \; \prod_{j=1}^{n-l} \prod_{k=1}^{l} |z_j - p_k|^2 \; \prod_{j=1}^{n-l} \rho(z_j)\, d\lambda(z_j). $$ There is also a constant depending on the interaction among the fixed points $p_1, \dots, p_l$, but that is absorbed into the normalization constant. We will denote this measure by $\mathbb{P}^{\rho,n}_{p}$, with $p = (p_1,\dots,p_l)$: we just fix those points and normalize to get a probability measure on $\mathbb{C}^{n-l}$. Now of course we are in the same situation as before: the factors involving each $z_j$ can be put together, so the density is $\prod_{i<j} |z_i - z_j|^2$ times $\prod_{j} \big( \prod_{k=1}^{l} |z_j - p_k|^2 \, \rho(z_j) \big) d\lambda(z_j)$. So this is again an orthogonal polynomial ensemble, with $n$ changed to $n - l$ and the weight $\rho$ changed to the new weight $\rho^{p}(z) = \rho(z) \prod_{k=1}^{l} |z - p_k|^2$. In particular we have the following.
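The factorization behind "conditioning gives a new orthogonal polynomial ensemble" is a one-line algebraic identity, and it can be checked numerically. Here is a small sketch (my own; the Gaussian, Ginibre-type weight and the particular points are illustrative choices): the full unnormalized density with the last $l$ coordinates frozen at $p_1, \dots, p_l$ equals the unnormalized density of the $(n-l)$-point ensemble with weight $\rho^{p}$, up to a constant depending only on $p$.

```python
import numpy as np

rng = np.random.default_rng(2)

def ope_density(z, rho):
    """Unnormalized orthogonal-polynomial-ensemble density on C^n."""
    n = len(z)
    vdm = np.prod([abs(z[i] - z[j]) ** 2
                   for i in range(n) for j in range(i + 1, n)])
    return vdm * np.prod([rho(w) for w in z])

rho = lambda z: np.exp(-abs(z) ** 2)           # illustrative Gaussian weight

# Freeze the last l = 2 coordinates at p_1, p_2; the conditional weight is
# rho_p(z) = rho(z) * prod_k |z - p_k|^2.
p = np.array([0.3 + 0.2j, -0.5j])
rho_p = lambda z: rho(z) * np.prod(np.abs(z - p) ** 2)

z = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # free coordinates
full = ope_density(np.concatenate([z, p]), rho)
cond = ope_density(z, rho_p)
const = ope_density(p, rho)    # the |p_a - p_b|^2 rho(p_a) interaction factors

# Full density = (new-weight ensemble density) x (constant depending on p only):
assert np.isclose(full, cond * const)
```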
If I now take another tuple $q = (q_1, \dots, q_l)$ of the same length, then the Radon–Nikodym derivative between the two conditional measures is $$ \frac{d\mathbb{P}^{\rho,n}_{p}}{d\mathbb{P}^{\rho,n}_{q}}(z_1,\dots,z_{n-l}) = C(p,q) \prod_{j=1}^{n-l} \prod_{k=1}^{l} \frac{|z_j - p_k|^2}{|z_j - q_k|^2}, $$ where $C(p,q)$ is the ratio of normalization constants; the factors involving only $\rho$ at the fixed points disappear into it. This is the ratio. Okay, so now we are going to study a small dynamics, and we will see that this Radon–Nikodym derivative plays an important role in computing the cocycle. Up to now there is no difficulty; this is just a lot of writing — there is no computation at all, in fact. So consider the following. We fix a diffeomorphism $F$ of $\mathbb{C}$ that moves points only inside a fixed compact subset $V \subset \mathbb{C}$: $F$ is the identity outside $V$, and inside there may be some change. For notation, I will actually use $F^{-1}$ — it is just more convenient — and consider the diagonal map $$ (z_1, \dots, z_n) \longmapsto \big(F^{-1}(z_1), \dots, F^{-1}(z_n)\big). $$ Originally we have the probability measure $\mathbb{P}^{\rho,n}$ defined on $\mathbb{C}^n$; we take its push-forward under this diagonal map, which means the measure of a subset is the original measure of its preimage. It is very easy to see what this push-forward measure is: since we have a density and $F$ is a diffeomorphism, this is really a classical exercise in computing push-forward measures, and we can compute the density explicitly. It is given by the following.
The push-forward measure is $$ \frac{1}{Z_n} \prod_{1 \le i < j \le n} |F(y_i) - F(y_j)|^2 \; \prod_{i=1}^{n} \rho(F(y_i))\, J_F(y_i) \; d\lambda(y_1)\cdots d\lambda(y_n), $$ where $J_F$ denotes the Jacobian of $F$. The important thing is the cocycle, which is exactly the Radon–Nikodym derivative of the push-forward with respect to the original measure. At a point $(z_1, \dots, z_n)$ it is given by $$ \prod_{1 \le i < j \le n} \frac{|F(z_i) - F(z_j)|^2}{|z_i - z_j|^2} \; \prod_{i=1}^{n} \frac{\rho(F(z_i))}{\rho(z_i)}\, J_F(z_i). $$ So far we have not yet used that $F$ moves only finitely many points, that is, that $F$ is the identity outside the compact set $V$. Suppose now — without loss of generality, after relabeling coordinates, since the probability is symmetric under permutations — that $z_1, \dots, z_l$ lie inside $V$ and $z_{l+1}, \dots, z_n$ lie outside. Then for $i > l$ we have $F(z_i) = z_i$, so $\rho(F(z_i)) = \rho(z_i)$ and $J_F(z_i) = 1$. The factors with both $i, j > l$ just disappear, but for $i \le l$ and $j > l$ there are still some terms, so the cocycle simplifies — a very simple computation, but one which will give us ideas — to $$ \prod_{1 \le i < j \le l} \frac{|F(z_i) - F(z_j)|^2}{|z_i - z_j|^2} \; \prod_{i=1}^{l} \prod_{j=l+1}^{n} \frac{|F(z_i) - z_j|^2}{|z_i - z_j|^2} \; \prod_{i=1}^{l} \frac{\rho(F(z_i))}{\rho(z_i)}\, J_F(z_i). $$ So now let us compare these two expressions and write a proposition.
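The cancellation just performed is purely algebraic, so it can be verified on made-up data. In the sketch below (my own; the displacement and the Jacobian values are arbitrary illustrative numbers, not an actual diffeomorphism), a map fixes $z_{l+1}, \dots, z_n$ and moves $z_1, \dots, z_l$; the full $n$-point cocycle product then agrees with the reduced $l$-point expression.

```python
import numpy as np

rng = np.random.default_rng(3)
rho = lambda z: np.exp(-abs(z) ** 2)          # illustrative weight

n, l = 5, 2
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Fz = z.copy()
Fz[:l] += 0.1 + 0.05j     # F moves only z_1, ..., z_l (the points inside V)
J = np.ones(n)            # Jacobians; J = 1 wherever F is the identity
J[:l] = 1.2               # hypothetical Jacobian values inside V

def full_ratio():         # cocycle as a product over all n points
    r = np.prod([abs(Fz[i] - Fz[j]) ** 2 / abs(z[i] - z[j]) ** 2
                 for i in range(n) for j in range(i + 1, n)])
    return r * np.prod([rho(Fz[i]) / rho(z[i]) * J[i] for i in range(n)])

def reduced_ratio():      # simplified cocycle: only the moved points matter
    r = np.prod([abs(Fz[i] - Fz[j]) ** 2 / abs(z[i] - z[j]) ** 2
                 for i in range(l) for j in range(i + 1, l)])
    r *= np.prod([abs(Fz[i] - z[j]) ** 2 / abs(z[i] - z[j]) ** 2
                  for i in range(l) for j in range(l, n)])
    return r * np.prod([rho(Fz[i]) / rho(z[i]) * J[i] for i in range(l)])

assert np.isclose(full_ratio(), reduced_ratio())
```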
We compute this only for the finite orthogonal polynomial ensemble; behind this simple computation there is actually a general result, but let us just give the simple version. Compare the middle factor of the simplified cocycle with the Radon–Nikodym derivative between conditional measures from before: the double product over $i \le l < j$ is exactly, up to a constant, the derivative $d\mathbb{P}^{\rho,n}_{(F(z_1),\dots,F(z_l))} / d\mathbb{P}^{\rho,n}_{(z_1,\dots,z_l)}$ evaluated at the unchanged points $(z_{l+1}, \dots, z_n)$ — that is, conditioning at the moved points $F(z_1), \dots, F(z_l)$ versus the original points $z_1, \dots, z_l$. So the cocycle can be written as this Radon–Nikodym derivative, times the ratio of normalization constants, times $$ \frac{|\Delta(F(z_1),\dots,F(z_l))|^2}{|\Delta(z_1,\dots,z_l)|^2} \; \prod_{i=1}^{l} \frac{\rho(F(z_i))}{\rho(z_i)}\, J_F(z_i), $$ where $\Delta$ denotes the Vandermonde determinant. Actually — do I still have five minutes? Ten, okay, that's great. So far I have not used that this is a determinantal point process. But in fact we are able to compute this constant. Let me just write it down first — probably I will make mistakes, I'm sorry for that. If I don't make a mistake, remembering that we have the kernel $K_n$ which is the sum over our orthonormal polynomials, the ratio of normalization constants times the Vandermonde ratio is given by $$ \frac{\det\big[K_n(F(z_i), F(z_j))\big]_{i,j=1}^{l}}{\det\big[K_n(z_i, z_j)\big]_{i,j=1}^{l}}, $$ so that these factors together give the cocycle as this determinant ratio times $\prod_{i=1}^{l} \frac{\rho(F(z_i))}{\rho(z_i)} J_F(z_i)$ times the Radon–Nikodym derivative of the conditional measures.
Okay, we have not yet proved this last step, because we need to compute the normalization constant, which requires a result about correlation functions. But let us first look at this formula. The final formula says the cocycle is given by such an expression. The important thing is this: the matrices here are $l \times l$ with $l$ fixed, so we can let $n$ go to infinity. Here $n$ is the number of particles of our determinantal point process, so letting $n \to \infty$ means we are studying an infinite point process. But the matrices stay finite: if $K_n \to K$, the limit is just a coefficientwise limit — we erase the $n$ and use $K$ directly — and the finite ensembles tend to our infinite point process. So we will actually have the following; let me write it, which is easier than saying it. Let $X$ be a configuration of our point process, and suppose $X \cap V = \{z_1, \dots, z_l\}$, where $V$ is the compact set inside which the diffeomorphism acts. This is of course a finite set: a configuration is locally finite, so its intersection with any compact set is finite. Then $$ \frac{d(F_* \mathbb{P})}{d\mathbb{P}}(X) = \frac{\det\big[K(F(z_i), F(z_j))\big]_{i,j=1}^{l}}{\det\big[K(z_i, z_j)\big]_{i,j=1}^{l}} \; \prod_{i=1}^{l} \frac{\rho(F(z_i))}{\rho(z_i)}\, J_F(z_i) \; \times \; \frac{d\mathbb{P}_{(F(z_1),\dots,F(z_l))}}{d\mathbb{P}_{(z_1,\dots,z_l)}}\big(X \setminus \{z_1,\dots,z_l\}\big), $$ where the last factor is the Radon–Nikodym derivative of the conditional measures at the moved points versus the original points, evaluated at the configuration with $z_1, \dots, z_l$ removed. (Strictly, removing $z_1, \dots, z_l$ matters only in the discrete case; in the continuous case we can forget about it.) Why do I still have four or five minutes? Okay. I will leave the remaining step as an exercise and give a hint — actually I will give a formula which is not easy, but I prefer that you compute it using it. Tomorrow, once we have this formula, we will focus precisely on the computation of such quantities.
Probably I forgot — oh no, okay, I didn't forget anything. The proposition is the following. Recall the Christoffel–Darboux kernel $K_n(z,w) = \sum_{k=0}^{n-1} p_k(z)\,\overline{p_k(w)}$. Then we have the following identity — those who know will recognize it as the statement that the $l$-point correlation function is $\det[K_n(z_i,z_j)]$: integrating out the variables from $l+1$ to $n$, $$ \int_{\mathbb{C}^{n-l}} \prod_{1 \le i < j \le n} |z_i - z_j|^2 \prod_{i=1}^{n} \rho(z_i) \; d\lambda(z_{l+1}) \cdots d\lambda(z_n) = Z_n\, \frac{(n-l)!}{n!}\, \det\big[K_n(z_i,z_j)\big]_{i,j=1}^{l} \prod_{i=1}^{l} \rho(z_i). $$ And the exercise is the following. Recall what $Z_n(p_1,\dots,p_l)$ is: it is the normalization constant of the conditional measure — we only need that the conditional measure is a probability — so, if I don't make a mistake here, $$ Z_n(p_1,\dots,p_l) = \int_{\mathbb{C}^{n-l}} \prod_{1 \le i < j \le n-l} |z_i - z_j|^2 \; \prod_{j=1}^{n-l} \prod_{k=1}^{l} |z_j - p_k|^2 \; \prod_{j=1}^{n-l} \rho(z_j)\, d\lambda(z_j). $$ Now this integral looks very similar to the correlation identity above, so by plugging it in we are able to compute this constant: the exercise is to show that the ratio $Z_n(p_1,\dots,p_l)/Z_n(q_1,\dots,q_l)$ can be represented using determinants built from $K_n$ at the points $p_1,\dots,p_l$ and a corresponding determinant at $q_1,\dots,q_l$. Okay, I will stop here.
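The correlation identity can be tested exactly in a discrete toy model, where the integrals become finite sums. The sketch below (my own; a 15-point grid on the real line with a Gaussian weight stands in for $(\mathbb{C}, \rho\, d\lambda)$) checks the $l = 1$ case: the intensity of the $n$-particle ensemble at a grid point $a$ equals $K_n(a,a)$ times the weight at $a$, with $K_n$ built by Gram–Schmidt.

```python
import numpy as np
from itertools import product

# Discrete orthogonal polynomial ensemble on a 15-point grid, n = 3 particles.
grid = np.linspace(-2, 2, 15)
w = np.exp(-grid ** 2)                 # weight at the grid points
n = 3

def dens(idx):                         # unnormalized ensemble density
    z = grid[list(idx)]
    v = np.prod([(z[i] - z[j]) ** 2
                 for i in range(n) for j in range(i + 1, n)])
    return v * np.prod(w[list(idx)])

Z = sum(dens(idx) for idx in product(range(15), repeat=n))

# Intensity at grid point a: n * P(z_1 = a), by brute-force summation.
rho1 = np.array([n * sum(dens((a,) + idx)
                         for idx in product(range(15), repeat=n - 1)) / Z
                 for a in range(15)])

# Orthonormal polynomials w.r.t. sum_i w_i delta_{x_i}, by Gram-Schmidt.
V = np.vander(grid, n, increasing=True)
P = []
for k in range(n):
    v = V[:, k].copy()
    for p in P:
        v -= np.sum(w * v * p) * p
    P.append(v / np.sqrt(np.sum(w * v * v)))
Kdiag = sum(p ** 2 for p in P)         # K_n(x, x) on the grid

# One-point correlation function = K_n(a, a) * w(a):
assert np.allclose(rho1, Kdiag * w)
```

The same brute-force comparison works for $l > 1$ with $\det[K_n(z_i,z_j)]$, at the cost of a larger enumeration.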