Thank you very much. Okay, so last time we finished with the formula giving the analogue of the Gibbs property for the sine process, so let us write it again. We consider the conditional measure of the sine process obtained by fixing the configuration in the complement of an interval; what I want to discuss today is precisely the degree of generality of this result. So: we fix the configuration in the complement of an interval I. In Pavel's talk yesterday we saw a detailed explanation of why the number of particles in the interval is then fixed — the rigidity of Ghosh and Peres — so I do not need to repeat that explanation here. We have fixed particles x in the environment outside: we fix the restriction of the configuration X to R \ I; this is the interval I; and the conditional measure is obtained by fixing the outside particles and allowing the particles inside to move, their total number being fixed, as Pavel explained to us yesterday. So this is the conditional measure, and the infinite product in the formula is understood in the sense of principal value. Yesterday we saw at least some elements of the argument, and I will explain the remainder today. I explained yesterday, I think in some detail, that the fact on which this statement is based is the relationship between the Palm measures of the sine process, which I wrote like this: the Palm measure at p has density, with respect to the Palm measure at q, given by the product over the configuration of ((x − p)/(x − q))², the product again understood in the principal-value sense. This in turn was derived from the relation between the Palm subspaces. So let L denote the range of the sine kernel; let us recall that the range of the sine kernel is just the Paley–Wiener space, the space of functions whose Fourier transform is
supported on [−π, π]. We then consider the subspace PW(p) of functions φ in the Paley–Wiener space such that φ(p) = 0, and we have the relation

PW(p) = ((x − p)/(x − q)) · PW(q).

Yesterday I simply said that from this relation one obtains the Palm equivalence; but in fact this step requires a certain effort, and I will say very briefly what that effort is, and then illustrate by examples how in different models one obtains different results. Let me first explain the case of the sine process. To stress immediately why some extra effort is needed, let me point out that for the sine process this multiplicative functional converges in principal value, as written here; but, for example, for the Airy process it does not — for the Airy process it simply diverges, because there are very few Airy particles on the negative semi-axis and very many on the positive semi-axis. Of course there is a way out: there is a need for regularization, and this is exactly what I aim to explain. Let me first put on the other blackboard, for reference, what we proved.
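(An aside added in editing: the fact that the sine kernel K(x, y) = sin π(x − y)/π(x − y) is the projection onto the Paley–Wiener space is easy to check numerically. Since the integer translates sinc(· − n) form an orthonormal basis of PW, the projection property takes the discrete form K(x, y) = Σₙ K(x, n) K(n, y); the function names in the sketch below are mine.)

```python
import numpy as np

def sine_kernel(x, y):
    """Sine kernel K(x, y) = sin(pi(x-y)) / (pi(x-y)); np.sinc(t) is sin(pi t)/(pi t)."""
    return np.sinc(x - y)

def projection_check(x, y, n_max=2000):
    """Check K(x, y) = sum_n K(x, n) K(n, y): the kernel reproduces itself
    against the orthonormal basis {sinc(. - n)} of the Paley-Wiener space."""
    n = np.arange(-n_max, n_max + 1)
    return np.sum(sine_kernel(x, n) * sine_kernel(n, y))

x, y = 0.3, 1.7
approx = projection_check(x, y)
exact = sine_kernel(x, y)
print(approx, exact)  # agree up to the O(1/n_max) truncation error
```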
We proved the following statement. I have a subspace L and the subspace √g·L; denote by Π the projection onto L and by Π_g the projection onto √g·L. Then, as we discussed in great detail, the determinantal point processes corresponding to Π and to Π_g are equivalent, and the Radon–Nikodym derivative is precisely the multiplicative functional corresponding to the function g. The point, however, is that it is often necessary to consider situations where this multiplicative functional, understood literally, diverges. The specific example is the Airy point process, where such a multiplicative functional simply diverges; there is a need for regularization. [Question: the configuration is infinite?] Yes, of course — it is an infinite configuration. [Question about the sense in which the product converges.] Yes, exactly — that is what I am coming to; I am answering your question right now. The statement will be understood almost surely: the convergence will take place for almost all configurations. And, as you say, one needs to place some conditions on the kernel in order that the convergence take place almost surely with respect to the determinantal point process. But there are some nuances. Let me put it this way: this statement on the blackboard is completely universal — it is true for the Airy kernel, it is true for many examples; on the other hand, the multiplicative functional might diverge, and does diverge in examples: it diverges for the Airy kernel, while for the sine kernel it does converge.
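(Added illustration: the statement about Π and Π_g has an exact finite-dimensional analogue checkable by linear algebra. For a rank-r projection Π = QQᵀ on Rᴺ, the process has exactly r points and P(X) = det Π_X; replacing L = range(Q) by √g·L multiplies these weights by ∏_{i∈X} g_i, up to the normalization det(Qᵀ g Q). The names below are mine.)

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, r = 7, 3

# Projection onto a random r-dimensional subspace L, and onto sqrt(g) * L.
Q, _ = np.linalg.qr(rng.standard_normal((N, r)))
g = rng.uniform(0.5, 2.0, size=N)
B = np.sqrt(g)[:, None] * Q                      # basis of sqrt(g) * L
Pi = Q @ Q.T
Pi_g = B @ np.linalg.inv(B.T @ B) @ B.T          # projection onto sqrt(g) * L

Z = np.linalg.det(Q.T @ (g[:, None] * Q))        # normalization det(Q^T g Q)

# For every r-point configuration X: det (Pi_g)_X = det Pi_X * prod(g_X) / Z,
# i.e. the Radon-Nikodym derivative is the multiplicative functional of g.
for X in combinations(range(N), r):
    idx = np.ix_(X, X)
    lhs = np.linalg.det(Pi_g[idx])
    rhs = np.linalg.det(Pi[idx]) * np.prod(g[list(X)]) / Z
    assert abs(lhs - rhs) < 1e-10
print("Radon-Nikodym derivative = multiplicative functional of g: checked")
```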
That is why I started with this example. For the Airy kernel, however, it diverges, so I will need a procedure of regularization of this multiplicative functional, which I am now going to explain. [Question about the points x.] Yes, here it is: the x's are distributed according to the underlying determinantal point process — they are a realization of the underlying process, say of the sine process — and I will explain the property of the sine process which ensures that almost every realization is such that this product converges, in this case in principal value. The measure on the t's is of course random, because x is a random configuration: the conditional measure is a random measure, because the conditioning itself is random. That's exactly right — it is a random measure. So first I fix a realization of the sine process, which is a random realization, and then to this realization the conditional measure is assigned. Okay, so I proceed, and let me explain.
My purpose now is to give meaning to this expression even when the multiplicative functional itself diverges. We shall see — this is what I will explain, a little briefly — that it is possible for this formula still to be true even when the multiplicative functional itself is not defined, but a weak version of it, a regularized multiplicative functional, is defined. This is precisely what I am going to explain, and then I will illustrate with examples showing how the regularization of multiplicative functionals takes different forms in different models. So let me start with a formula. I want to consider multiplicative functionals; but multiplicative functionals are exponentials of additive functionals, so let me start with additive functionals, and let me first regularize additive functionals. For this, let me point out that if I have an additive functional S_f = Σ_{x∈X} f(x) — again, these are formulas we saw in Pavel's talk and also in other talks — then the expectation of the additive functional is of course

E S_f = ∫ f(t) Π(t, t) dμ(t),

while the variance — and this is a formula that we saw several times — is

(★)  Var S_f = ½ ∬ (f(x) − f(y))² |Π(x, y)|² dμ(x) dμ(y),

written rigorously with the measure μ. This formula holds in full generality.
That is to say: I have a projection Π from some L²(μ) onto a subspace L, and the corresponding determinantal point process P_Π; then (★) is a direct verification using the definition. In fact, Pavel wrote for us yesterday a more general formula, valid for an arbitrary point process, of which this is a corollary, because the first term in the general formula vanishes in the determinantal case. Okay. One can see, by looking attentively at this formula, that the variance is essentially a Sobolev 1/2-norm. [Correction from the audience.] Oh, there is, of course — yes, thank you very much, I wrote it incorrectly. So: the variance is essentially a Sobolev 1/2-norm. But it is very easy to give an example of a function which fails to be integrable but whose Sobolev 1/2-norm is finite; and to such functions the construction extends. So let us consider the map

f ↦ S_f − E S_f,

which I denote by S̄_f. This map f ↦ S̄_f is well defined and can be extended by continuity to all f for which the right-hand side of (★) is finite — and such functions may or may not be integrable. The function 1/x is precisely a case in point: for it, the integral defining E S_f is, strictly speaking, not defined, while the integral in (★) is perfectly well defined. And so, precisely, one can define the regularized multiplicative functional at this point.
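(Added illustration: in a discrete model the variance formula (★) is a two-line computation. For a finite projection kernel K and a function f on the sites, Var S_f = tr(f²K) − tr(fKfK), and this equals the double sum ½ Σ (f_i − f_j)² K_ij² precisely because K² = K. The names below are mine.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random N x N orthogonal projection K of rank r: K = Q Q^T, Q orthonormal.
N, r = 12, 5
Q, _ = np.linalg.qr(rng.standard_normal((N, r)))
K = Q @ Q.T                      # symmetric, K @ K == K

f = rng.standard_normal(N)       # values of the test function on the N sites
F = np.diag(f)

# Variance of the linear statistic S_f for the determinantal process with kernel K:
var1 = np.trace(F @ F @ K) - np.trace(F @ K @ F @ K)

# The Sobolev-1/2-type double sum from formula (star):
var2 = 0.5 * np.sum((f[:, None] - f[None, :]) ** 2 * K ** 2)

print(var1, var2)  # the two expressions agree because K is a projection
```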
At this point one defines a regularized multiplicative functional. Let me write it this way: tautologically,

E[exp(S_f)] = E[exp(S̄_f)] · exp(E S_f).

This is a tautology — except that the factor E[exp(S̄_f)] can be well defined in situations in which neither E[exp(S_f)] nor exp(E S_f) is well defined. This is exactly the same situation as what I wrote for the Hilbert–Carleman regularization of the determinant: the regularized determinant can be well defined when neither of the two factors is well defined. So this regularized multiplicative functional is well defined, by a certain limit transition, even in situations when neither factor is. In particular — yes, I am coming to it — in particular, if a kernel Π on R satisfies

∫ Π(x, x) / (1 + x²) dx < ∞

— observe that it is x squared; for example, the Airy kernel satisfies this — then the regularized multiplicative functional is well defined. So the regularized product of ((x − p)/(x − q))² is well defined — with respect to the Palm measure P^q, of course. Now, there are two nuances here. The first nuance is at the point q, where the kernel has a zero.
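In symbols — this is my reconstruction of the blackboard formulas — the regularization step reads:

```latex
% Centred additive functional, extended by continuity in the Sobolev-1/2 norm (star):
\overline{S}_f \;=\; S_f - \mathbb{E}\,S_f,
\qquad
\operatorname{Var} S_f \;=\; \tfrac12 \iint \bigl(f(x)-f(y)\bigr)^{2}\,
  |\Pi(x,y)|^{2}\, d\mu(x)\, d\mu(y).

% Tautological factorization, with f = \log g:
\mathbb{E}\bigl[e^{S_f}\bigr] \;=\;
  \mathbb{E}\bigl[e^{\overline{S}_f}\bigr]\cdot e^{\mathbb{E} S_f}.

% The regularized multiplicative functional is the first factor,
%   \widetilde{\Psi}_g := \exp\bigl(\overline{S}_{\log g}\bigr),
% defined whenever the variance of S_{\log g} is finite, even if
%   \mathbb{E} S_{\log g} = \int \log g(t)\, \Pi(t,t)\, d\mu(t)   diverges.
```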
So, at q: the function ((x − p)/(x − q))² obviously has a singularity at q. This, however, is not a problem, because — let us recall the formula of Shirai and Takahashi for the Palm kernel Π^q that we wrote last time — one can see from that formula that on the diagonal at q the Palm kernel has a zero, and in fact a zero of order two. Not only does it have a zero, it has a zero of order two, so there is no problem with defining the functional in a neighbourhood of q. The problem is, rather, with convergence at infinity; but again, using this regularization and using this formula, one can define the regularized multiplicative functional. In particular, let us look again at this case: for the function 1/x the quantity in (★) is perfectly finite, so the regularized multiplicative functional is well defined, whereas the non-regularized one is not. And the formula holds in complete generality, with a certain correction. So let me now formulate the general form of the formula; let me erase the procedure of regularization. Here I consider a kernel having integrable form — or, equivalently, a kernel satisfying the weak division axiom: if φ ∈ L and φ(p) = 0, then φ(x)/(x − p) ∈ L. So consider kernels of this form.
This direction is ongoing joint work with Roman Romanov. So, in general, we consider an integrable kernel, and we have this regularity assumption. I should say that the regularity assumption is sometimes not verified — there is a technical difficulty — so let me restrict myself in this discussion to the one-dimensional real case; but let me point out that the assumption is not verified in the one-dimensional complex case: the result is true there, but this assumption does not hold. Allow me, please, not to give a rigorous formulation, but to limit myself to a non-rigorous one. Just as with determinants, the regularization can be pushed further. The expression det(1 + A) can be regularized further: one can write

det(1 + A) = e^{tr A} · det₂(1 + A),

but one can also write

det(1 + A) = e^{tr A − ½ tr A²} · det₃(1 + A),

and the advantage is that det₃(1 + A) is defined for operators which are not even Hilbert–Schmidt but lie in the Schatten class S₃ — and so on, one can continue this game. Going further has not so far been necessary in the examples; but the case when this condition does not hold does arise, namely when we consider the determinantal point process corresponding to the Fock space, which we saw several times at this conference — the analogous one-dimensional complex situation. So let me write it here.
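(Added illustration: the finite-dimensional shadow of these regularized determinants is the chain of identities just written, with det_k(1 + A) = det((1 + A) · exp(−A + A²/2 − …)), the exponent truncated before the k-th term. A quick check for a small matrix — the analytic point, of course, is that det₂ and det₃ survive limits in which tr A diverges:)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.3 * rng.standard_normal((4, 4))
I = np.eye(4)

det1 = np.linalg.det(I + A)
# Hilbert-Carleman (det_2) and det_3 regularized determinants:
det2 = np.linalg.det((I + A) @ expm(-A))
det3 = np.linalg.det((I + A) @ expm(-A + A @ A / 2))

tA = np.trace(A)
tA2 = np.trace(A @ A)
# det(1+A) = e^{tr A} det_2(1+A) = e^{tr A - tr(A^2)/2} det_3(1+A):
print(det1, np.exp(tA) * det2, np.exp(tA - 0.5 * tA2) * det3)  # all three coincide
```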
So the kernel is

K(z, w) = e^{z w̄ − |z|²/2 − |w|²/2},

which is just the reproducing kernel of the Fock space, in its normalized form. What is the Fock space? It is the space of entire functions square-integrable with respect to the Gaussian weight; it is spanned by the monomials z^k, and the reproducing kernel is generated accordingly. Okay. In this case the assumption will not hold; all the formalism goes through, but the regularization must go one step further, to det₃, and this was done in joint work with Yanqi Qiu. So let me just point out that from here we have this — it just follows from the weak division axiom, so there is no question about it: the relation

L(p) = ((x − p)/(x − q)) · L(q)

follows directly from the weak division axiom. And, using the regularization of multiplicative functionals, one is able to obtain the same formula for the regularized multiplicative functional: even when only the regularized multiplicative functional exists, this formula still holds. One then obtains the same formula, but with an important difference: here I write 'regularized' — not principal value any more, but the regularized multiplicative functional. Again, 'regularized' means that the exponential of the trace is subtracted: the product diverges, but if you divide out the exponential of the trace of the logarithm, it converges. So the meaning of the regularization is very clear. And there is just one more correction term: I wrote dt_i.
In fact, I have to write w(t_i) dt_i, where w is a weight function which depends only on the kernel. (I first wrote ρ; let me denote it by w instead.) So, in this generality, one has the general formula with the regularized multiplicative functional and with this weight function w. In some examples it is possible to find this function w explicitly, so let us consider examples. I will consider two: one example without regularization and one example with regularization. The example without regularization is the Bessel kernel, which again we saw several times in the talks during this conference. I write it in a slightly different form, but it comes down to the same thing; it is even more convenient to write the Bessel kernel as

J̃_s(x, y) = ¼ ∫₀¹ J_s(√(tx)) J_s(√(ty)) dt.

What is the Bessel kernel? It is the kernel of a spectral projection, very similar to the sine kernel: the sine kernel is the spectral projection onto the Paley–Wiener space, the space of functions whose Fourier transform is supported in [−π, π].
That is the picture for the sine kernel; for the Bessel kernel, instead of the Fourier transform one has to consider the Hankel transform

H_s f(λ) = ½ ∫₀^∞ f(r) J_s(√(λr)) dr.

By the way, in many sources — for example on Wikipedia — the definition of the Hankel transform is for some reason given only for s ≥ −1/2, and I could never understand why; in fact it is perfectly well defined for all s > −1. It is really a source of great mystery to me why people insist on s ≥ −1/2 rather than s > −1. The Hankel transform is perfectly well defined, and it is involutive: for the Fourier transform you have the transform and the inverse transform, but the Hankel transform is just an involution — a wonderful transform. Just like the Fourier transform, it gives a spectral projection, here corresponding to the operator of the Bessel equation; let me not write it down, we do not need it for the purposes of this talk. The point is that the kernel is integrable — you can see that the kernel has integrable form. The kernel arises, of course, as a scaling limit of orthogonal polynomial ensembles. Which orthogonal polynomials? One can consider either Jacobi polynomials or Laguerre polynomials; for computations it is often easier to work with Laguerre polynomials, but the Jacobi polynomials come down to the same.
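(Added illustration: the involutivity of this form of the Hankel transform is easy to test numerically. For s = 0 and f(r) = e^{−r}, the classical integral ∫₀^∞ e^{−r} J₀(2√(ar)) dr = e^{−a} gives H₀f(λ) = ½ e^{−λ/4}, and applying H₀ once more returns e^{−x}. The names below are mine.)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def hankel0(f, lam):
    """H_0 f(lambda) = (1/2) * int_0^inf f(r) J_0(sqrt(lambda * r)) dr."""
    val, _ = quad(lambda r: f(r) * j0(np.sqrt(lam * r)), 0, np.inf, limit=400)
    return 0.5 * val

f = lambda r: np.exp(-r)

# First application: H_0 f(lam) equals (1/2) exp(-lam/4).
g3 = hankel0(f, 3.0)
print(g3, 0.5 * np.exp(-3.0 / 4))

# Second application (using the closed form of H_0 f): back to f itself.
x = 1.5
back = hankel0(lambda lam: 0.5 * np.exp(-lam / 4), x)
print(back, np.exp(-x))  # involution: H_0 (H_0 f) = f
```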
This is the classical Mehler–Heine asymptotics of 1847: Jacobi polynomials converge to Bessel functions, and one can see it very nicely — the Jacobi equation converges to the Bessel equation, so the spectral projections converge. Just as Hermite polynomials converge to the sine function, Jacobi polynomials converge to the Bessel function, so it should not greatly surprise us that the Christoffel–Darboux kernel converges to this kernel. The kernel has integrable form, so — again, not surprisingly — the weak division axiom holds (it comes down to the same thing), and then the relation between Palm measures holds as well: that is just a corollary of the weak division axiom. And here the result holds with no regularization at all: the multiplicative functional converges in the ordinary way. Indeed, one can check that

∫₀^∞ J̃_s(x, x) / (1 + x) dx < ∞.

This is maybe clearest from the integral form, because one can see that the Bessel kernel has a certain power decay; it is also clear if one writes the kernel with derivatives. The decay is modest, but it is enough for all the products to converge literally, and the weight function here is w(t) = t^s. So much for the Bessel kernel: everything holds literally. The second example I would like to consider is the gamma kernel of Borodin and Olshanski, where the regularization takes a specific form. [Question: where does the weight function w come from?] Yes — excuse me — this is a very good question; so where does the function w come from in this case?
Let us look at the formula for an orthogonal polynomial ensemble, as we wrote it last time. There we had the formula

dP^p / dP^q = [w(p) K_n(q, q)] / [w(q) K_n(p, p)] · ∏_i ((x_i − p)/(x_i − q))²,

where w is the weight of the ensemble and K_n its Christoffel–Darboux kernel. This is how the formula looks for the orthogonal polynomial ensemble; so, to get the limiting weight function, one needs to take the limit of these prefactors — which is straightforward if there is no regularization, and trickier if there is. Okay. Now let me introduce the gamma kernel of Borodin and Olshanski, obtained as a limit of the z-measures, about which Pierre spoke to us on Monday; it is a kernel on the lattice Z. It is again an integrable expression built from gamma functions: I write a function A_{z,z′}(x) involving square roots of ratios of gamma factors such as Γ(z + x) and Γ(z′ + x) — I write it in this way so that the expression is well defined, as we saw in Pierre's talk, either when z′ = z̄ or when z and z′ both lie in the same interval (m, m + 1). There is also a discrete series, but let us skip it for the time being. By the way, what Pierre did not say, but should be said: the z-measures arise in representation theory and are intimately connected with the representation theory of SL(2, R) — the terms 'principal series', 'complementary series' and 'discrete series' are in fact the principal, complementary and discrete series of representations of SL(2, R), and so forth. So I write the gamma kernel K_{z,z′}. This kernel, as I said, arises from a measure on partitions; and observe that it has integrable form.
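(Added illustration: the orthogonal-polynomial-ensemble formula above can be checked directly in the smallest nontrivial case, n = 2 with the Hermite weight w(x) = e^{−x²}. The Palm density at p is ρ₂(p, x)/ρ₁(p), and the ratio of Palm densities at p and q reproduces exactly the prefactor times ((x − p)/(x − q))². The names below are mine.)

```python
import numpy as np

# Orthonormal Hermite functions phi_k(x) = h_k(x) sqrt(w(x)), w(x) = exp(-x^2):
# phi_0 = pi^{-1/4} e^{-x^2/2},  phi_1 = sqrt(2) x pi^{-1/4} e^{-x^2/2}.
def phi(k, x):
    c = np.pi ** -0.25
    return c * np.exp(-x * x / 2) * (1.0 if k == 0 else np.sqrt(2.0) * x)

def K2(x, y):
    """Christoffel-Darboux kernel of the n = 2 Hermite ensemble (weight included)."""
    return sum(phi(k, x) * phi(k, y) for k in range(2))

def palm_density(p, x):
    """Unnormalized density of the Palm measure at p: rho_2(p, x) / rho_1(p)."""
    return (K2(p, p) * K2(x, x) - K2(p, x) ** 2) / K2(p, p)

w = lambda x: np.exp(-x * x)

p, q, x = 0.4, -1.1, 2.3
lhs = palm_density(p, x) / palm_density(q, x)
rhs = (w(p) * K2(q, q)) / (w(q) * K2(p, p)) * ((x - p) / (x - q)) ** 2
print(lhs, rhs)  # the two ratios coincide
```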
So, by definition, the kernel has integrable form, and it arises from measures on partitions; so it should not greatly surprise us that there are very few particles on the positive semi-axis and very many particles on the negative semi-axis, in such a way that K(x, x) ~ 1/x as x → +∞ and K(x, x) ~ 1 − 1/x as x → −∞. Again, let us recall that we are dealing with Young diagrams, and in the particle picture of a Young diagram there are very many holes on one side and very many particles on the other. (No — I wrote it correctly; to my own surprise, I have not confused particles and holes.) Okay: very many holes here, very many particles here. So this is the gamma kernel, and clearly the multiplicative functional is not defined. The regularized multiplicative functional, however, is defined, and in this case the regularization will take the following form. To regularize the quantity when there are too many particles for things to converge, one proceeds as follows: I take the product of g(x) over the particles x > 0 (so x ∈ X), and then the product of g(y)^{-1} over the holes y < 0 (so y ∉ X). On the positive semi-axis I have very few particles — excuse me, I said the opposite a moment ago — so I can take the product over those; on the negative semi-axis, on the other hand, I have very few holes, so there I can take the product over the holes. In fact, let me formulate it like this: when I subtract the exponential of the trace, I subtract the term corresponding to the configuration in which all sites on the negative semi-axis are particles — I subtract, so to speak, all the particles — and what remains is where I have the holes.
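Schematically — this is my reconstruction of the blackboard expression, with g the function of the multiplicative functional — the regularization just described reads:

```latex
% Literal multiplicative functional over the configuration X \subset \mathbb{Z}:
% diverges, since the particles accumulate on the negative semi-axis.
\Psi_g(X) \;=\; \prod_{x \in X} g(x).

% Regularized version: product over the (sparse) particles on the positive
% semi-axis, times the inverse product over the (sparse) holes on the negative
% one -- i.e. \Psi_g divided by the ``full'' product over all sites x < 0:
\widetilde{\Psi}_g(X)
  \;=\; \prod_{\substack{x \in X \\ x > 0}} g(x)
        \;\times\;
        \prod_{\substack{y \notin X \\ y < 0}} g(y)^{-1}.
```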
So this is the regularization for the multiplicative functional. And I think I mentioned — let me say it again — that this whole subject started from a question of Olshanski. In 2011 Olshanski treated the action of the infinite symmetric group on the point process with the gamma kernel, and he proved this formula for a specific function g, by a limit transition from Young diagrams: for Young diagrams the expression can be written down explicitly — he has the z-measure, and one can write the product expression explicitly — so he wrote this expression, and then he asked me what happens for the sine process; well, here is the answer. Okay. Finishing with the examples, please allow me to pursue a digression. In these two examples — the Bessel kernel and the gamma kernel — one observes a phenomenon for which I would like to have a more conceptual explanation. The phenomenon is the following. The Bessel kernels form — well, obviously, a family, but in fact not just a family: a hierarchy. Let me point out that J̃_s(x, y) is equal to J̃_{s+2}(x, y) plus a rank-one term.
Let me write it like this:

J̃_s(x, y) = J̃_{s+2}(x, y) + (s + 1) · (J_{s+1}(√x)/√x) · (J_{s+1}(√y)/√y).

So the Bessel kernel with index s is a rank-one perturbation of the Bessel kernel with index s + 2. And here — as the speaker I get questions, but I would like to ask a question of the experts in the audience. To every such determinantal point process there corresponds an integrable system, unless I am mistaken: the gap probability, the determinant, is a solution of some integrable system. My question is: here I have a hierarchy of kernels — what is the corresponding hierarchy on the level of integrable systems? That is my question. Okay. Observe, as a corollary, that the sequence

√r · J_r(√x)/√x,   r = s + 1, s + 3, s + 5, …

forms an orthonormal basis for the range of J̃_s. The convenience of this basis is that it is iterative: if I pass from s to s + 2, there is simply one element fewer. And clearly — it is also clear from this formula — the Bessel kernels converge to zero as s → ∞, so everything is consistent. Let me just say that this perturbative statement can also be expressed on the level of multiplicative functionals, so after this it should not be very surprising. In fact, I can also state it with Palm measures. Let me write, this time, L_s for the range of J̃_s; then the Palm subspace L_s(p) — which is, by definition — excuse me, let me write it here.
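(Added illustration: the rank-one hierarchy relation and the integral form of the kernel can be checked against each other numerically; the function names are mine, and the factor r in ψ_r ψ_r is exactly the (s + 1) of the formula.)

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def bessel_kernel(s, x, y):
    """J~_s(x, y) = (1/4) * int_0^1 J_s(sqrt(t x)) J_s(sqrt(t y)) dt."""
    val, _ = quad(lambda t: jv(s, np.sqrt(t * x)) * jv(s, np.sqrt(t * y)), 0, 1)
    return 0.25 * val

def psi(r, x):
    """Basis function sqrt(r) * J_r(sqrt(x)) / sqrt(x)."""
    return np.sqrt(r) * jv(r, np.sqrt(x)) / np.sqrt(x)

s, x, y = 0.0, 2.0, 3.0
lhs = bessel_kernel(s, x, y)
# Rank-one perturbation: J~_s = J~_{s+2} + psi_{s+1}(x) psi_{s+1}(y).
rhs = bessel_kernel(s + 2, x, y) + psi(s + 1, x) * psi(s + 1, y)
print(lhs, rhs)  # the two sides agree
```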
L_s(p), as always, is the Palm subspace: the set of f ∈ L_s with f(p) = 0. And I can write:

L_s(p) = ((x − p)/x) · L_{s+2}.

So in fact there is a recurrence relation, and it makes perfect sense in terms of Palm theory: the next Bessel kernel is obtained by adding a particle to the previous Bessel kernel — indeed, you can see it is a rank-one perturbation; I add one more particle to the previous Bessel kernel, and here it is. That is to say, the Palm measure of the one is equivalent to the other. Here it can be written explicitly, and then there is a corollary, published in the Proceedings of the Steklov Institute in 2016: the Palm measure of the Bessel process at a point p is absolutely continuous with respect to the measure with shifted parameter s + 2, and the Radon–Nikodym derivative is just the product of ((x − p)/x)² over the configuration, with some normalization constant — here it is just an ordinary product, without any regularization. So much for the Bessel kernel. Let me also very briefly say that the same result is obtained, in recent joint work with Olshanski from 2019 (it is on the arXiv), for the gamma kernel; let us assume we still remember the scheme of regularization — I erase this. So, for the gamma kernel, in the joint work with Olshanski from 2019, the same statement holds. We can again produce an iterative basis — there is a little nuance here — so let me write it down: the basis functions G_m^{z,z′}(x) are given by some constant c_{z,z′}, which let me not even bother writing down, times an explicit product of gamma-function factors such as Γ(x + z) and Γ(x + z′ + 1). I should say that in the paper, of
course, with Olshanski, we use Olshanski's notation, where there are one-halves everywhere — in all the formulas, with every integer there is a one-half — so I skip them in this presentation. So there is this basis, and then one can write

K_{z,z′}(x, y) = Σ_{m=0}^∞ G_m^{z,z′}(x) · G_m^{z,z′}(y).

For the gamma kernel one can write this representation explicitly; it is an inductive representation. This basis is orthogonal when z′ = z̄ — so in the principal series it is an orthogonal basis, and the situation is completely the same as in the Bessel case. In the complementary series, however, it is not an orthogonal basis but a biorthogonal one: an orthogonal inductive basis in the principal series, a biorthogonal one in the complementary series. Nevertheless, one obtains a recurrence relation very similar to the one for Bessel. Let me write it here — I erase the definition of the gamma kernel, we already remember it:

K_{z,z′}(x, y) = G_0^{z,z′}(x) G_0^{z,z′}(y) + (next kernel),

where the next kernel is K_{z+1,z′+1} — but there is a little nuance for the gamma kernel, namely a shift and a twist, a gauge transformation, given in the x variable by the ratio A_{z,z′}(x)/A_{z+1,z′+1}(x) —
and, I wanted to say, the same thing in y: a_{z+1,z'+1}(y), a_{z,z'}(y). So this is the formula for the gamma kernel. Observe that it is a little bit twisted, and this twist is by a function of absolute value one in the orthogonal case; but in the non-orthogonal case this function is not of absolute value one. And, by the way, I take advantage of this to ask a question to the experts in the audience: what would be the integrable system corresponding to the gamma kernel? What would be the gap determinant for the gamma kernel? Is it possible to write, I don't know, a Painlevé equation which corresponds to it? Or is it possible to write the integrable system that lies behind this kernel? Okay. In particular, from this hierarchy we also obtain again that the Palm measure — the Palm measure of the gamma-kernel measure at p; let me not write the formula, let me refer to the preprint — is equivalent, up to an explicitly written multiplicative functional (in fact the multiplicative functional is always the same, it just has to be understood in a regularized sense), to the measure P_{K_{z+1,z'+1}}. So there is the same hierarchy in the gamma case also. And in concluding this digression,
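The resulting Palm hierarchy for the gamma kernel, with the same multiplicative functional as in the Bessel case but now understood in a regularized sense (I refer to the preprint for the precise normalization), can be written as:

```latex
% Palm measure of the gamma-kernel measure at the point p:
(\mathbb{P}_{K_{z,z'}})^{p} \;\sim\; \mathbb{P}_{K_{z+1,z'+1}},
% with Radon--Nikodym derivative the multiplicative functional
\frac{d\,(\mathbb{P}_{K_{z,z'}})^{p}}{d\,\mathbb{P}_{K_{z+1,z'+1}}}(X)
  \;=\; C \prod_{x \in X} \Bigl(\frac{x-p}{x}\Bigr)^{2}
  \quad\text{(understood in the regularized sense)}.
```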
let me put a digression on a digression. With this hierarchy it is possible to construct an analog of determinantal measures which are not finite but sigma-finite; let me say this, and with this I will finish. So it is possible to construct an object which I call infinite determinantal measures. In fact, the need for this object came from the following construction of Borodin and Olshanski, who observed that the Bessel kernel arises in the question of ergodic decomposition of measures on the space of infinite matrices. For lack of time, please allow me to be very brief here. They consider a sequence of measures on spaces of matrices — n by n complex matrices — which I will write now. Let me point out that these measures are finite when s is bigger than minus one. So where do these measures arise? If s equals zero, this is a very specific measure. And what does the space of matrices mean here? It is in fact a complex Grassmannian. Think about the Grassmannian of C^n in C^{2n}: a subspace of half dimension in general position is the graph of a map, and this graph of a map is precisely an n by n matrix. So a flat coordinate on the Grassmannian identifies the Grassmannian with the space of n by n complex matrices — the complex Grassmannian Gr(n, 2n) — up to, obviously, some negligible sets. Then the canonical invariant measure on the Grassmannian is identified with precisely this measure, but with s equal to zero. The Grassmannian carries a canonical invariant measure: it is the homogeneous space given by the quotient of U(2n) by U(n) times U(n), and as such it carries a canonical invariant measure.
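For reference, the measures in question are the Pickrell-type measures; if I recall their standard form correctly, they are:

```latex
% Pickrell measures on n x n complex matrices:
\mu^{(n)}_{s}(dZ) \;=\; c_{n,s}\,
  \det\bigl(1 + Z^{*}Z\bigr)^{-2n-s}\, dZ,
% finite precisely when s > -1.  For s = 0 this is the image of the
% invariant measure on the complex Grassmannian Gr(n, 2n) under the
% flat chart sending a subspace (the graph of a map C^n -> C^n)
% to its n x n matrix.
```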
So this canonical invariant measure, under passing to the flat coordinate, becomes this measure. The canonical invariant measure of the Grassmannian is tautologically unitarily invariant; and tautologically, if I take the Grassmannian of order n plus one and project to the Grassmannian of order n, the invariant measures will also project. So one obtains that these measures are projectively equivariant. The miracle is that the property of projective equivariance is preserved for all s. For this miracle I don't have a convincing explanation. It is possible to write some other measures on the Grassmannian; it is possible also to see this in terms of some Gaussian distribution, and then, somehow, instead of Laguerre polynomials with s equal to zero, consider Laguerre polynomials with parameter s — something like this — but I don't have a conceptual explanation of why these measures actually arise. But these measures do arise, and this sequence of measures is projectively equivariant: that is to say, when I pass from a matrix of order n plus one to a matrix of order n, one measure is taken onto the next measure. By the way, an open question: are these the only measures multiplicative in the eigenvalues and projectively equivariant? I have no answer; there is some result of this type by Raskowski for related measures, but for these measures, nothing. So it is an open question whether they are canonical in this sense. However this may be, these measures exist on infinite matrices.
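The consistency property can be stated as follows, with the projection taken, under the flat coordinates, by cutting out a corner of the matrix; I state this schematically:

```latex
% Projection from order n+1 to order n:
\pi^{\,n+1}_{\,n} : \mathrm{Mat}(n+1,\mathbb{C}) \to \mathrm{Mat}(n,\mathbb{C}),
% and projective equivariance of the whole family:
\bigl(\pi^{\,n+1}_{\,n}\bigr)_{*}\, \mu^{(n+1)}_{s} \;=\; \mu^{(n)}_{s}
  \qquad \text{for all } s,
% tautological for s = 0 (invariant measures project to invariant
% measures), but a small miracle for general s.
```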
So this projective family of measures defines a measure on the space of infinite matrices, and decomposing this measure into ergodic components gives the Bessel process. It is very nice, but this measure can also be defined when it is infinite. If s is less than minus one, the measure is still well defined — it will be an infinite measure, but all the infinity will live only on some small values of n. If n is sufficiently large, then, as I mentioned, since we consider a projective family of measures, the fibers will actually have finite measure, so the measure on the space of infinite matrices will be well defined. So there is the question of its ergodic decomposition, and the answer would have to be an infinite measure on the space of configurations. In fact, such an infinite measure on the space of configurations can be constructed — it has been constructed. This was a question of Borodin–Olshanski, and our answer to this question of Borodin–Olshanski can be obtained by iterating this procedure still further: to perturb the determinantal kernel by a non-square-integrable function, or, in other words, to consider a measure whose conditional measures are determinantal but which is itself sigma-finite. In fact, what one can check is that perturbing a determinantal measure by a non-square-integrable function — considering a perturbation not by a rank-one operator but by an unbounded operator — corresponds to adding a particle to the configuration, but a particle which, so to speak, approaches the edge: it grows and grows and grows, and in fact is infinite. And so, instead of determinantal measures — or, by the way, an equivalent way of seeing this infinite determinantal measure is as the product of a determinantal measure by a
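Schematically, and only as a sketch of the construction just described, an infinite determinantal measure arises as:

```latex
% Multiply a determinantal measure P_K by a multiplicative
% functional built from a function g for which the product makes
% sense pointwise, but with \sqrt{g}\,L no longer inside L^2:
\mathbb{B} \;=\; \Bigl(\prod_{x \in X} g(x)\Bigr) \cdot \mathbb{P}_{K},
% the resulting measure is sigma-finite rather than finite, and it
% becomes an ordinary determinantal measure under taming.
```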
multiplicative functional which converges, but is not square-integrable. Then the space square root of g times L will no longer be a space of square-integrable functions: the subspace will be well defined, but — it is quite easy to take some subspace of L^2 and then multiply it by some function in such a way that the result is no longer a space of square-integrable functions, but a space containing functions whose norm is infinite. To such an object corresponds an analog of a determinantal measure, but one which is infinite; it has all the properties of determinantal measures, and it becomes a determinantal measure under taming. One can tame in different ways and obtain different measures, but the object is one. And to such a perturbation — to such a finite-rank perturbation of the Hilbert space by non-square-integrable functions — corresponds a finite-particle perturbation of the determinantal measure, an infinite determinantal measure. Thank you very much.