Thank you very much. So let us start with the Gaussian Unitary Ensemble. Let $H$ be an $n \times n$ Hermitian matrix, and let us endow the space of $n \times n$ Hermitian matrices with the Gaussian distribution: writing $H = (H_{ij})$, each off-diagonal entry above the diagonal is a standard complex Gaussian, each diagonal entry is a standard real Gaussian, and all these entries are independent. This is our probability distribution, and the distribution of the eigenvalues of the matrix is then given by the following formula, which we also essentially saw in Alice's lecture (with a normalizing constant $Z_n$ whose value we will not compute today):

$$ p(\lambda_1,\dots,\lambda_n) \;=\; \frac{1}{Z_n}\,\prod_{1\le i<j\le n}(\lambda_i-\lambda_j)^2\,\prod_{i=1}^{n} e^{-\lambda_i^2/2}, $$

a product of the square of the Vandermonde determinant times a product of weights. In this course we will be interested mainly in the case of the square of the Vandermonde determinant; perhaps we will very briefly have time to touch on the cases where, instead of Hermitian matrices, one considers orthogonal or symplectic ensembles, and instead of the exponent $2$ one has $1$ or $4$ — but not today. My aim today is to start by briefly recalling how the sine kernel arises in the analysis of the Gaussian Unitary Ensemble, that is, of Gaussian Hermitian matrices. The first step is to rewrite the Vandermonde determinant. The Vandermonde determinant

$$ \prod_{1\le i<j\le n}(\lambda_j-\lambda_i) \;=\; \det\bigl[\lambda_i^{\,l}\bigr]_{\substack{i=1,\dots,n\\ l=0,\dots,n-1}} \;=\; \det\bigl[p_l(\lambda_i)\bigr] $$

can be written as the determinant of any family of monic polynomials $p_l$ of degree $l$, and then, writing a product of determinants as the determinant of a product, its square can be written as

$$ \prod_{i<j}(\lambda_i-\lambda_j)^2 \;=\; \det\Bigl[\,\sum_{l=0}^{n-1} p_l(\lambda_i)\,p_l(\lambda_j)\Bigr]_{i,j=1}^{n}. $$

This is a purely algebraic identity, where the $p_l$ are arbitrary monic polynomials; if the polynomials are not monic, then there will be an extra constant in this formula.
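(The entry distribution just described is easy to simulate. Here is a minimal numerical sketch, not part of the lecture; the only assumption is the standard convention that a standard complex Gaussian has independent real and imaginary parts of variance $1/2$.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gue(n):
    """n x n GUE matrix: standard complex Gaussians above the diagonal,
    standard real Gaussians on the diagonal, Hermitian by construction."""
    a = rng.normal(size=(n, n))
    b = rng.normal(size=(n, n))
    h = np.triu((a + 1j * b) / np.sqrt(2), k=1)   # E|H_ij|^2 = 1 off-diagonal
    h = h + h.conj().T                            # fill in the lower triangle
    h = h + np.diag(rng.normal(size=n))           # N(0, 1) on the diagonal
    return h

eigs = np.linalg.eigvalsh(sample_gue(500))        # the random eigenvalues
```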
So this is just a purely algebraic identity, which uses nothing special. (Is this part of the blackboard visible? Not so much? Then let me rewrite it over here — like this, better.) However, it is convenient to take the $p_l$ to be orthogonal polynomials, and there is a reason for that. So let us take $p_k$ to be the Hermite polynomials, that is, the orthogonal polynomials with weight $e^{-x^2/2}$, and let us write the identity above for Hermite polynomials — which we may do, since we can write it for any polynomials; there will be a constant, because the Hermite polynomials are not monic. We take them orthonormal:

$$ \int_{\mathbb R} p_k(x)\,p_l(x)\,e^{-x^2/2}\,dx \;=\; \delta_{kl}. $$

Let me introduce the corresponding kernel,

$$ K_n(\lambda_i,\lambda_j) \;=\; \sum_{l=0}^{n-1} p_l(\lambda_i)\,p_l(\lambda_j)\; e^{-(\lambda_i^2+\lambda_j^2)/4}; $$

inserting the weight factor into the kernel is purely a matter of notational convenience. Now, with this specific choice of polynomials, one gets a formula which I leave as an exercise, but which we can discuss on the blackboard if desired:

$$ \int_{\mathbb R} \det\bigl[K_n(\lambda_i,\lambda_j)\bigr]_{i,j=1}^{k+1}\, d\lambda_{k+1} \;=\; (n-k)\,\det\bigl[K_n(\lambda_i,\lambda_j)\bigr]_{i,j=1}^{k}. \tag{$\star$} $$

This is a key formula for us. It is related to the fact that, on the one hand, one can easily compute

$$ \int_{\mathbb R} K_n(\lambda,\lambda)\,d\lambda \;=\; n, \tag{1} $$

precisely because we chose orthonormal Hermite polynomials, and, on the other hand, the kernel has, by its very definition — and this is where the choice of orthonormal Hermite polynomials plays its role — the reproducing property

$$ \int_{\mathbb R} K_n(\lambda_1,\mu)\,K_n(\mu,\lambda_2)\,d\mu \;=\; K_n(\lambda_1,\lambda_2). \tag{2} $$

And (1) and (2) together imply $(\star)$; this is a straightforward exercise, which I ask those who have never done it to do. The point — and this is precisely how determinantal point processes are born — is not just that the probability density can be represented as a determinant. It is true that the probability density of the eigenvalue distribution of the Gaussian Unitary Ensemble, of Hermitian matrices with independent Gaussian entries, can be represented as a determinant; but the point is that the projections of this measure can also be represented as determinants. In other words, consider the correlation functions. Let me recall the definition — in fact, in this case let me just say, and I will give the general definition later, that here by correlation function I mean precisely the integral of the probability density over some of the variables (multiplied by the combinatorial factor $n!/(n-l)!$ counting the ordered choices of the remaining variables): the correlation function of order one is the integral of the probability density with respect to all variables except one of them; that of order two is the integral of the density over all variables except two of them; and so on. So let us now look at the correlation functions.
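(Properties (1) and (2) can be checked numerically. A small sketch, with the orthonormal Hermite polynomials taken from numpy's probabilists' `hermite_e` module, which matches the weight $e^{-x^2/2}$; the grid and the integration range are ad hoc choices.)

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, pi, sqrt

def phi(l, x):
    """Orthonormal Hermite polynomial p_l, with the weight e^{-x^2/4} folded in."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return hermeval(x, c) * np.exp(-x**2 / 4) / sqrt(sqrt(2 * pi) * factorial(l))

def K(n, x, y):
    """The kernel K_n(x, y) = sum_{l < n} phi_l(x) phi_l(y)."""
    return sum(phi(l, x) * phi(l, y) for l in range(n))

n = 6
x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]
print(np.sum(K(n, x, x)) * dx)                    # property (1): should be ~ n
x0, x1 = 0.3, -1.1                                # property (2), reproducing:
print(np.sum(K(n, x0, x) * K(n, x, x1)) * dx, K(n, x0, x1))
```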
The correlation functions have the form

$$ \rho_l(\lambda_1,\dots,\lambda_l) \;=\; \det\bigl[K_n(\lambda_i,\lambda_j)\bigr]_{i,j=1}^{l}. $$

The beauty and the usefulness of this formula are due to the fact that in the computation of the $l$-th correlation function I only have an $l \times l$ determinant. Let us imagine that we consider a matrix a million by a million: some huge matrix with very many eigenvalues, whose density involves a huge, million-by-million determinant which is absolutely impossible to compute. At the same time, what I want to know is how many eigenvalues I have in some fixed interval — and for this I don't need to know the whole determinant; I only need the first correlation function. The expected number of eigenvalues in an interval is, by definition, the integral of $\rho_1(\lambda)\,d\lambda$ over that interval, obtained by integrating out all of the superfluous variables; and precisely this correlation function is what I get from formula $(\star)$, which I did not prove but left as an exercise: $\rho_1(\lambda) = K_n(\lambda,\lambda)$. It indeed stands to reason — it comes from the very definition of our problem — that the expectation of the total number of particles is $n$, just as in (1). If I want to know how many pairs of particles belong to a given square, I need to compute the second correlation function, which is just the determinant

$$ \rho_2(\lambda_1,\lambda_2) \;=\; \det\begin{pmatrix} K_n(\lambda_1,\lambda_1) & K_n(\lambda_1,\lambda_2)\\[2pt] K_n(\lambda_2,\lambda_1) & K_n(\lambda_2,\lambda_2) \end{pmatrix}, $$

and so on; and note that our kernel is symmetric. By the way, already at this stage I would like to make a remark that we will exploit repeatedly in this course. Consider the second correlation function minus the product of the first correlation functions. In the $2\times 2$ determinant there is a diagonal term and there is an off-diagonal term — the determinant is the product of the diagonal entries minus the off-diagonal product — so what I get is

$$ \rho_2(\lambda_1,\lambda_2) - \rho_1(\lambda_1)\,\rho_1(\lambda_2) \;=\; -\,K_n(\lambda_1,\lambda_2)^2 \;\le\; 0. $$

So our eigenvalues are what is called negatively correlated: the presence of an eigenvalue in a given position exerts a negative influence on the presence of an eigenvalue in another position. Conditioned on there being an eigenvalue somewhere, it is less probable that there will be an eigenvalue somewhere close by. The correlation is negative — this is something we will repeatedly exploit — and it perfectly stands to reason that eigenvalues repel, in the same way as, for example, roots of polynomials repel. It is also very clear from the formula with the determinant, and from looking attentively at the formula with the Vandermonde determinant: eigenvalues don't like to be close together, and it is highly improbable that eigenvalues cluster in a small interval. In fact, I expect to prove in this course that if I fix an interval $I$, then the probability that the number of eigenvalues in $I$ is greater than some number $k$ decays as $e^{-\alpha k^2}$, where $\alpha$ depends on $I$. But let us not go into that; let us just observe that this probability decays very fast — we will see why. So it is hugely improbable that eigenvalues cluster, and the first manifestation of this we see here. Okay, this was a digression; what I want to say now is that this formula naturally motivates a passage to the limit, as follows.
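(Repulsion is easy to see in simulation: compare the small nearest-neighbour spacings of GUE eigenvalues with those of independent uniform points. A rough sketch; the threshold and the sizes are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
a = rng.normal(size=(n, n))
b = rng.normal(size=(n, n))
h = np.triu((a + 1j * b) / np.sqrt(2), k=1)
h = h + h.conj().T + np.diag(rng.normal(size=n))

eigs = np.sort(np.linalg.eigvalsh(h))
bulk = eigs[n // 4 : 3 * n // 4]                  # stay away from the edges
s_gue = np.diff(bulk) / np.diff(bulk).mean()      # normalized spacings
s_iid = np.diff(np.sort(rng.uniform(size=bulk.size)))
s_iid = s_iid / s_iid.mean()

# fraction of very small spacings: much smaller for GUE (repulsion)
print(np.mean(s_gue < 0.1), np.mean(s_iid < 0.1))
```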
Please observe that in this formula the only dependence on $n$ is in the appearance of the kernel $K_n$ itself: the determinantal structure of the formula does not depend on $n$. This setup makes it very natural to try to pass to the limit as $n \to \infty$. So let me quote a result which I will not prove in this course — the semicircle law of Wigner — and not only will I not prove it, in fact I will not even formulate it precisely; I will formulate it vaguely. The eigenvalues of a matrix of the Gaussian Unitary Ensemble are distributed according to a semicircle law. My matrix $H$ has $n$ eigenvalues, and — this is the theorem of Wigner — they live in the interval $[-2\sqrt n,\, 2\sqrt n]$. Again, the precise formulation of this theorem requires effort and I won't give it; nowadays it is possible simply to Google the Wigner semicircle law and find everything you want to know about it. But: the eigenvalues live on an interval from $-2\sqrt n$ to $2\sqrt n$, there are $n$ of them, so the typical spacing between eigenvalues is of size $1/\sqrt n$, and the distribution of eigenvalues obeys the semicircle density. (Question from the audience about the scaling — yes, excellent point, thank you very much: I need the rescaled density.) If I rescale so that the semicircle becomes a semicircle of radius $2$, that is, if I divide the eigenvalues by $\sqrt n$, then their distribution obeys the semicircle density

$$ \rho_{\mathrm{sc}}(x) \;=\; \frac{1}{2\pi}\sqrt{4-x^2}, \qquad x \in [-2,2]. $$

Okay, so this was the theorem of Wigner. Now I come to formulating — at this point non-rigorously, but in fact we will later formulate it rigorously — a theorem of Dyson. Dyson is interested in the local statistics of the eigenvalues. The theorem of Wigner can be seen as an analogue, in this situation, of the law of large numbers, or as a theorem on the limit shape: the eigenvalues have a limit shape. The natural next question is the question about the deviations from the limit shape. So Dyson places himself at some position on the semicircle curve — it can be at different positions — and looks around him. The important thing is that he does not place himself at the edge of the curve, because there the picture is completely different; he places himself in the bulk of the curve. There, obviously, the closest eigenvalue is at a distance of order $1/\sqrt n$ from him — I wrote the formula with the rescaling, but the picture Dyson observes is without rescaling — so he needs to apply a homothety with coefficient $\sqrt n$. He does that, and then it is clear to Dyson that he needs to look at the asymptotics of the kernel under this scaling: that is to say, he takes some $E$ strictly between $-2$ and $2$ and looks at the kernel at the points $E\sqrt n + x/\sqrt n$ and $E\sqrt n + y/\sqrt n$ — and in fact the value of the density at $E$ also plays a role in the scaling.
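(Before the asymptotics, a quick numerical check of the semicircle law — a sketch reusing the sampler above; the comparison is at the bin centres of a crude histogram.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
a = rng.normal(size=(n, n))
b = rng.normal(size=(n, n))
h = np.triu((a + 1j * b) / np.sqrt(2), k=1)
h = h + h.conj().T + np.diag(rng.normal(size=n))

x = np.linalg.eigvalsh(h) / np.sqrt(n)            # rescaled to live on [-2, 2]
hist, edges = np.histogram(x, bins=60, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
rho_sc = np.sqrt(4 - centers**2) / (2 * np.pi)    # semicircle density
print(np.max(np.abs(hist - rho_sc)))              # small for large n
```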
And then — let me again speak non-rigorously, although this statement can be made rigorous; in fact, it is a very classical theorem of Szegő — every orthogonal polynomial, in the bulk, looks like a sine function. This can be proved in very substantial generality: an orthogonal polynomial of order $n$ has $n$ zeros and oscillates between them, so it perfectly stands to reason that it should look like a sine function — and in fact it does. In this scaling the Hermite polynomials behave like the sine function; this is called the Plancherel–Rotach asymptotics. And so the rescaled kernel has a limit — I always reproduce this formula with mistakes, so I have checked it against the sources:

$$ \lim_{n\to\infty}\; \frac{1}{\rho_{\mathrm{sc}}(E)\sqrt n}\; K_n\!\Bigl(E\sqrt n + \frac{x}{\rho_{\mathrm{sc}}(E)\sqrt n},\; E\sqrt n + \frac{y}{\rho_{\mathrm{sc}}(E)\sqrt n}\Bigr) \;=\; \frac{\sin \pi(x-y)}{\pi(x-y)}. $$

As I said, by the classical asymptotics, which hold for very general orthogonal polynomials, each polynomial contributes a sine; by the Christoffel–Darboux formula, what one obtains is a difference of sines divided by the difference of the arguments. The proof of this requires a quite nontrivial amount of effort, but let me just write the formula and look at it for now. Under this scaling, when Dyson observes the behaviour of the eigenvalues at any position in the bulk of Wigner's semicircle curve, he sees a limit distribution for the rescaled eigenvalues. And so the question arises: does there exist such a probability distribution? Obviously, since $n$ goes to infinity, it cannot be a finite collection of eigenvalues — it must be an infinite collection. So: does there exist such a probability distribution for an infinite collection of points on the real line? Clearly Dyson sees not just the eigenvalue closest to him; he also sees the second closest, the third, the fourth, and so on — he sees a whole collection of eigenvalues, and the correlation functions of this collection are given by this determinantal formula.
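(The bulk limit just written can also be checked numerically. A sketch in which the orthonormal Hermite functions are computed by their three-term recurrence, which is numerically stable for large $n$, unlike evaluating the polynomials directly; $E = 0$, the centre of the bulk, is an arbitrary choice.)

```python
import numpy as np

def K(n, x, y):
    """K_n(x, y) = sum_{l<n} phi_l(x) phi_l(y), with phi_l the orthonormal
    Hermite functions, via phi_{l+1}(t) = (t phi_l - sqrt(l) phi_{l-1}) / sqrt(l+1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    p0x = np.exp(-x**2 / 4) / (2 * np.pi) ** 0.25
    p0y = np.exp(-y**2 / 4) / (2 * np.pi) ** 0.25
    p1x, p1y = x * p0x, y * p0y
    k = p0x * p0y + p1x * p1y
    for l in range(1, n - 1):
        p2x = (x * p1x - np.sqrt(l) * p0x) / np.sqrt(l + 1)
        p2y = (y * p1y - np.sqrt(l) * p0y) / np.sqrt(l + 1)
        k += p2x * p2y
        p0x, p1x = p1x, p2x
        p0y, p1y = p1y, p2y
    return k

n, E = 200, 0.0
rho = np.sqrt(4 - E**2) / (2 * np.pi)             # semicircle density at E
u = np.linspace(0.05, 3, 60)                      # u = x - y, rescaled
scaled = K(n, np.full_like(u, E * np.sqrt(n)),
           E * np.sqrt(n) + u / (rho * np.sqrt(n))) / (rho * np.sqrt(n))
print(np.max(np.abs(scaled - np.sin(np.pi * u) / (np.pi * u))))   # -> 0 as n grows
```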
After these preliminaries, we are ready to formulate rigorously one of the main definitions of the course. This will take a little bit of time, because to define the Dyson sine process one first needs to define what a point process is — what the meaning of the words "point process" is — and I will do this now. In their classical introduction to point processes, Daley and Vere-Jones trace the history of point processes back to the work of John Graunt in the 17th century. Graunt was the first to compile mortality tables for the city of London — in fact, the first sustained effort at studying the demographics of London. In particular, he observed that the growth of the population of the city of London was three times as high as the growth of the population of the kingdom of England, and he observed how it varied by neighbourhood, and so forth. The point is that he was investigating a sequence of indistinguishable random events — in his setting, deaths in the city of London — which happened, in his case, within a certain span of time. We are therefore interested in probability distributions on the space of collections of points. Let us start with the case of $\mathbb R$. A collection of points in $\mathbb R$ without accumulation points will be called a configuration on $\mathbb R$; we introduce the space of configurations

$$ \mathrm{Conf}(\mathbb R) \;=\; \{\,X \subset \mathbb R : X \text{ has no accumulation points}\,\}, $$

and a point $x \in X$ will be called a particle of the configuration. This space is a Polish space — a complete separable metric space — in particular because it can be viewed as a space of measures: to $X$ one assigns the Radon measure $\sum_{x \in X} \delta_x$ (Radon in the sense that it assigns finite weight to every compact set, which holds by the definition of a configuration), and then $\mathrm{Conf}(\mathbb R)$ inherits the topology of the space of measures. The space of Radon measures naturally carries the vague topology — the topology of convergence against compactly supported continuous functions — which turns it into a metric space, and our space inherits this topology. Informally, this topology can be explained in a very simple way: what is a neighbourhood of a configuration? I can illustrate it by a drawing. Take a large interval; inside it, the points of the configuration are allowed to oscillate a little bit, and beyond it you do what you want. So two configurations are close if they are close on a large compact set, and beyond this compact set they do whatever they want. This is our topology, which, as I said, comes from the topology on the space of measures. Let me also say that, having introduced this topology, I do not really need it very much, because I can consider just the Borel structure, and the Borel structure is given by what are called the occupation variables: for $A$ a Borel subset of $\mathbb R$, set

$$ \#_A(X) \;=\; \bigl|\,A \cap X\,\bigr|, $$

the number of particles of $X$ in $A$. Proposition: the Borel structure induced by the collection of all the $\#_A$ coincides with the Borel structure induced by the metric. This is something of which one can convince oneself, and the point is that, in order to define a point process, it suffices to define the joint distributions of the $\#_A$: a point process is uniquely defined once the joint distributions of the occupation variables $\#_A$ are specified. Here — excuse me, I haven't yet said it — a point process on $\mathbb R$ is, by definition, just a Borel probability measure $\mathbb P$ on the space of configurations. Now the correlation functions of a point process are defined as follows. I take a continuous compactly supported function $f$ on $\mathbb R^l$ and consider the sum $\sum f(x_{i_1},\dots,x_{i_l})$ over all ordered choices of $l$ distinct particles of the configuration. The expectation of this sum over the space of configurations — should it converge — is, by the Riesz representation theorem, the integral of $f$ against some measure $\rho_l$:

$$ \int_{\mathrm{Conf}(\mathbb R)} \sum f(x_{i_1},\dots,x_{i_l})\; d\mathbb P(X) \;=\; \int_{\mathbb R^l} f(y_1,\dots,y_l)\, d\rho_l(y_1,\dots,y_l), $$

and this measure $\rho_l$ is precisely the correlation measure.
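(To make the definition concrete, here is a Monte-Carlo sketch for the simplest example, the homogeneous Poisson process on $[0,1]$ with intensity $\lambda$, for which the second correlation measure is just $\lambda^2\,dx\,dy$; the test function $f(x,y) = xy$ is an arbitrary choice, and $\iint xy\,dx\,dy = 1/4$ over the unit square.)

```python
import numpy as np

rng = np.random.default_rng(3)
lam, trials = 5.0, 100_000
acc = 0.0
for _ in range(trials):
    pts = rng.uniform(size=rng.poisson(lam))      # one Poisson configuration
    s1, s2 = pts.sum(), (pts**2).sum()
    acc += s1 * s1 - s2                           # sum over ordered pairs i != j of x_i * x_j
print(acc / trials)                               # E sum f(x_i, x_j)
print(lam**2 * 0.25)                              # lam^2 times the integral of f
```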
This is the definition of the correlation measure. In all our examples the measure $\rho_l$ will admit a density with respect to Lebesgue measure, and this density is, of course, the correlation function. Again, there are many delicate points here over which I will pass only very briefly, because in all our examples all the necessary conditions are verified: there is absolutely no reason why correlation functions should exist in general — it just so happens that in our examples they do. And the fact that the correlation functions determine the process uniquely is related to the well-posedness of the moment problem. Let me leave a little exercise, which we can discuss next time if desired: the correlation functions determine the joint moments — not the joint distributions, but the joint moments — of the occupation variables $\#_{A_1},\dots,\#_{A_l}$. So if the moment problem is well posed — which in our situation it is — it follows that the correlation functions determine the process uniquely. And now I am ready to formulate one of the main definitions of the course: the sine process. The sine process is a point process on $\mathbb R$ whose correlation functions have the form

$$ \rho_l(x_1,\dots,x_l) \;=\; \det\bigl[S(x_i,x_j)\bigr]_{i,j=1}^{l}, \qquad S(x,y) \;=\; \frac{\sin \pi(x-y)}{\pi(x-y)}. $$

This is the definition of the sine process. (A question from the audience: isn't there an integral missing above? — Yes, there is an integral, thank you very much: the expectation in the defining identity is the integral against $d\mathbb P$, of course.) A very naive question arises here: why does the sine process exist? Why does a point process with these correlation functions exist? From the vague discussion above I have more or less explained why this process, should it exist, is unique — and in fact I will give a different proof of uniqueness later, maybe even this time — but the question of why it exists remains a question, which we will discuss in this course. In fact, the full proof of existence appeared much later than the sine process itself was written down. The sine process was written down in the 1960s; then, in a revolutionary paper, Odile Macchi, a French physicist, suggested a general determinantal model for this kind of processes, which she called fermion processes and which today, following Borodin and Olshanski, are called determinantal processes. But she did not prove the existence of the sine process, and in fact the proof of its existence is already a result of the new millennium, due to the work of Soshnikov and, independently and simultaneously, of Shirai and Takahashi. I should say that today the proof can be given in two lines, and I will; but for the moment I want to point out that the fact that such a process exists is a genuine question — it is not obvious at all. Okay, so let me say what we want to do with the sine process, where we are going with it. Let me now make a little jump and explain some of the dynamical properties of the sine process. Let me point out that I consider not the finite-particle case but immediately the infinite-particle case: in this course I will study the sine process — and many other determinantal processes — directly in the infinite-particle limit.
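(Before going on, it may help to record the Fourier representation of the sine kernel just defined — a standard identity; it is exactly the projection property that will reappear at the end of the lecture, where $S$ is identified as the kernel of the projection onto the Paley–Wiener space:)

$$ S(x,y) \;=\; \frac{\sin \pi(x-y)}{\pi(x-y)} \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{\,it(x-y)}\,dt. $$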
The first statement that I would like to formulate is a central limit theorem for the sine process. One can ask: how does this infinite configuration behave? One of the results about this is the central limit theorem, which was proved specifically for the sine process by Costin and Lebowitz, and in full generality by Soshnikov. Consider the number of particles of the sine process in a growing interval: let $\#_N(X)$ denote the number of particles in the interval $[0, N]$. Please observe that the intensity of the sine process is one — the first correlation function is identically equal to one; this is why we chose the scaling the way we did — and it is a stationary process. So the expectation of $\#_N$ is $N$. On the other hand, one can already see a manifestation of the phenomenon of repulsion, which I briefly mentioned before, in the fact that the variance of this random variable grows very, very slowly:

$$ \operatorname{Var} \#_N \;=\; \frac{1}{\pi^2}\log N + O(1). $$

As opposed to, say, the Poisson process — the situation where points are thrown independently on the line — the variance grows very, very slowly, so the configuration is very ordered. This is just one specific manifestation; we will see many, many more. Nonetheless, as Costin–Lebowitz and Soshnikov proved,

$$ \frac{\#_N - \mathbb E\,\#_N}{\sqrt{\operatorname{Var}\#_N}} \;\longrightarrow\; \mathcal N(0,1), $$

that is, the normalized particle number converges to the normal law. This limit theorem of Costin–Lebowitz and Soshnikov has a functional analogue, which we proved in joint work with Dymov: there is an analogue of this statement in a space of functions, but it is different from Donsker's invariance principle. I consider the quantity $\#_{tN}$, normalized as above — centred and divided by the square root of the variance — as a process in $t$. If I consider just this quantity, there is no convergence to a limiting process: there is no convergence in the space of continuous functions. But if, on the other hand, I consider its integral in $t$,

$$ \xi_N(\tau) \;=\; \int_0^{\tau} \frac{\#_{tN} - \mathbb E\,\#_{tN}}{\sqrt{\operatorname{Var}\#_N}}\; dt, $$

then this quantity does converge in the space of continuous functions: $\xi_N$ converges to a Gaussian process, for which one can explicitly compute all the relevant quantities. By the way, I should say that we proved this result for the sine process, but the method is specific to the sine process; even, for example, for the process with the Airy kernel, we do not have a proof. It seems that it should hold in great generality, because somehow it is related to the Gaussian free field, and so it should always hold — but we do not have a proof even for the Airy process.
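(Before moving on, the logarithmic variance growth above can be checked directly from the determinantal structure: for a determinantal process, $\operatorname{Var}\#_A = \int_A K(x,x)\,dx - \iint_{A\times A} K(x,y)^2\,dx\,dy$. A crude numerical sketch; note that numpy's `sinc` is exactly our kernel, $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$.)

```python
import numpy as np

def var_count(N, m=200_000):
    """Variance of the number of sine-process particles in [0, N]:
    Var = N - double integral of S(x - y)^2 over [0, N]^2; by stationarity
    the double integral reduces to 2 * int_0^N (N - t) * S(t)^2 dt."""
    t = np.linspace(0.0, N, m)
    dt = t[1] - t[0]
    integrand = (N - t) * np.sinc(t) ** 2         # np.sinc(t) = sin(pi t) / (pi t)
    return N - 2 * np.sum(integrand) * dt

for N in (10, 100, 1000):
    print(N, var_count(N), np.log(N) / np.pi**2)  # same logarithmic growth, up to O(1)
```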
Next, the sine process has a remarkable property of rigidity — the Ghosh–Peres rigidity, which is quite remarkable. It says that if I fix the configuration of the sine process beyond a certain interval, then the number of particles in the interval belongs to the sigma-algebra spanned by the events beyond the interval. This is a quite remarkable statement: if I close off this interval, then it is possible to determine the number of particles in it by looking only at the complement of the interval. And let me point out that, in this specific, stationary case, the statement can be derived from a theorem of Kolmogorov from 1941 on the interpolation of stationary processes: Kolmogorov has a spectral criterion for the interpolation of stationary processes, and it is possible to derive the Ghosh–Peres statement from the Kolmogorov theorem. Okay. And let me also say: the number of particles is fixed, but their positions are not. In fact, it is possible — this is another result that we will discuss in the course — to compute the conditional distribution of the positions of these particles. Their number $n$ is fixed; if I denote by $X$ the fixed configuration, then the conditional distribution of the positions $(t_1,\dots,t_n)$ inside the interval $I$, given $X \cap (\mathbb R \setminus I)$ — the restriction of the configuration to the complement of the interval — is the following. Essentially, one can say that we are discussing the analogue of the Gibbs property for the sine process, and the sine process does satisfy this analogue of the Gibbs property, with the restriction that the number of particles is fixed; the potential of interaction is what Alice mentioned in the first class, the Coulomb potential. Let me formulate the result precisely: the conditional distribution is an orthogonal polynomial ensemble — I have the square of the Vandermonde determinant and a weight, and the weight is given by the following double product:

$$ p(t_1,\dots,t_n \mid X \setminus I) \;=\; \frac{1}{Z}\,\prod_{1\le i<j\le n}(t_i-t_j)^2\;\prod_{i=1}^{n}\;\prod_{x \in X \setminus I}\Bigl(1 - \frac{t_i}{x}\Bigr)^{2}, $$

where the product is over $i$ from $1$ to $n$ and over the particles $x$ of $X$ outside $I$, taken in the principal-value sense, because otherwise it fails to converge — and note that there is a square here too. This is the formula for the conditional measure. And how can one understand this formula? Very simply: this is the formula one would get if the sine process were an orthogonal polynomial ensemble with uniform weight and infinitely many particles. If one writes such a formula for the conditional measures of an orthogonal polynomial ensemble, one gets factors of exactly this kind; here one can formally pass to the limit in the conditioning — and the product does converge — and one obtains the formula as if the sine process were an orthogonal polynomial ensemble with infinitely many particles. And the formula is correct for the sine process; we will discuss it in detail in the course.
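(To see where the double product comes from, here is the formal finite-dimensional computation behind the phrase "as if the sine process were an orthogonal polynomial ensemble" — a sketch, not the actual proof: take an ensemble with $n + m$ particles and weight $w$, and condition on the $m$ particles $x_1,\dots,x_m$ lying outside the interval. Since the full squared Vandermonde factors over the two groups of variables,)

$$ p(t_1,\dots,t_n,x_1,\dots,x_m) \;\propto\; \prod_{i<j}(t_i-t_j)^2 \prod_{k<l}(x_k-x_l)^2 \prod_{i=1}^{n}\prod_{k=1}^{m}(t_i-x_k)^2\; \prod_{i=1}^{n} w(t_i)\prod_{k=1}^{m} w(x_k), $$

(so, absorbing every factor not involving the $t$'s into the normalization, and dividing each factor $(t_i - x_k)^2$ by the constant $x_k^2$ as well,)

$$ p(t_1,\dots,t_n \mid x_1,\dots,x_m) \;\propto\; \prod_{i<j}(t_i-t_j)^2\;\prod_{i=1}^{n} w(t_i)\prod_{k=1}^{m}\Bigl(1-\frac{t_i}{x_k}\Bigr)^{2}. $$

(With the uniform weight $w \equiv 1$ and $m \to \infty$ in the principal-value sense, one formally obtains the conditional measure written above.)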
In the remaining ten minutes, let me say just the following: from the Ghosh–Peres rigidity, Ghosh derives a remarkable corollary for the sine process. To motivate this corollary, let me point out that the sine kernel is the kernel of the projection onto the Paley–Wiener space — the space of functions whose Fourier transform has compact support, namely in $[-\pi,\pi]$. Ghosh proves, using the Ghosh–Peres rigidity, that the family of functions $e^{ixt}$ — where $x$ ranges over a realization $X$ of the sine process and $t \in (-\pi,\pi)$ is the variable — is complete in $L^2(-\pi,\pi)$. And in fact it was conjectured by Lyons and Peres that such a result holds in full generality for determinantal processes, and this is what we proved with Yanqi Qiu and Alexander Shamov. In the time that is left, let me formulate the result in a specific example which in itself is very beautiful and is due to Peres and Virág — an example very different from the sine process, which shows how very different problems lead to the appearance of determinantal processes. In these last five minutes we start completely afresh: one can forget everything that went before. I consider the unit disk, and on it the random series

$$ f(z) \;=\; \sum_{n=0}^{\infty} a_n z^n, $$

where the $a_n$ are independent standard complex Gaussians — of zero expectation and unit variance — and I consider the zero set $X$ of this random holomorphic function. From your favourite formula for the radius of convergence of a power series, it follows that the radius of convergence of this series is equal to one, so $f$ is a holomorphic function on the unit disk, and it has a collection of zeros there. The correlation functions of this collection of zeros are again determinantal and have the form

$$ \rho_l(x_1,\dots,x_l) \;=\; \det\bigl[K(x_i,x_j)\bigr]_{i,j=1}^{l}, $$

where $K$ is the Bergman kernel — please observe that we are now in a complex situation. What is the Bergman kernel? It is the kernel of the projection from the space of square-integrable functions on the disk onto the Bergman space, the subspace of holomorphic square-integrable functions. An orthogonal basis of the Bergman space is given by the monomials $z^k$, with $\|z^k\|^2 = \pi/(k+1)$ with respect to area measure, and so one arrives at the formula for the reproducing kernel of this space:

$$ K(z,w) \;=\; \frac{1}{\pi}\sum_{k=0}^{\infty}(k+1)\,(z\bar w)^k \;=\; \frac{1}{\pi\,(1-z\bar w)^2}. $$

So the distribution of the zeros of our random series is a determinantal point process with this kernel — these are the correlation functions of our point process — and this is a very beautiful theorem of Peres and Virág from 2003.
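(A numerical sketch of the theorem at the level of the first intensity: the expected number of zeros in a disk of radius $r$ is $\int_{|z|<r} K(z,z)\,dA = r^2/(1-r^2)$, which can be compared with the zeros of a truncated series; the truncation degree $N$ is an ad-hoc choice, and for $|z| \le 0.7$ the tail is negligible.)

```python
import numpy as np

rng = np.random.default_rng(4)
N, r, trials = 60, 0.7, 2000
counts = []
for _ in range(trials):
    a = (rng.normal(size=N + 1) + 1j * rng.normal(size=N + 1)) / np.sqrt(2)
    zeros = np.roots(a[::-1])                     # np.roots wants highest degree first
    counts.append(np.sum(np.abs(zeros) < r))
print(np.mean(counts))                            # empirical mean count in |z| < r
print(r**2 / (1 - r**2))                          # integral of the Bergman diagonal
```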
The experts in the audience can correct me, but unless I am mistaken, there does not exist a simple proof — there is no three-line proof of this theorem — and all the proofs, at least all the ones I know, involve heavy computations. My favourite one is the proof by Krishnapur, in which he realizes these zeros of a random Gaussian series as eigenvalues — obviously not of a unitary matrix, since the eigenvalues of a unitary matrix lie on the unit circle, but of a corner of a unitary matrix: if one takes a unitary matrix, cuts out a corner, and takes the limit appropriately, then one gets exactly this process. But still, even the proof of Krishnapur requires some nontrivial computations, and this beautiful statement still lacks a simple proof. Let me just point out that from the formula of Peres and Virág it follows that the zero set is invariant under Lobachevsky isometries: it is useful to consider the unit disk as the Poincaré model of the Lobachevsky plane, and the Lobachevsky isometries act on it and preserve the distribution of $X$ — they do not, of course, preserve the distribution of the function itself. And the statement that we proved in this case, in joint work with Qiu and Shamov, is that $X$ is a uniqueness set for the Bergman space. This theorem is, one can say, a cousin of the theorem of Ghosh, and it answers, for the Bergman space, the question asked by Lyons and Peres. It means the following: if a function $f$ in the Bergman space is zero in restriction to $X$, then $f$ is identically zero. This is a theorem that we proved, and we will discuss it in the course in great detail. And, to close for today, let me formulate the following result from work in progress with Qiu. In the theorem just stated, we show that if a Bergman function is zero in restriction to $X$, then it is zero identically; this means that a Bergman function is uniquely determined by its values in restriction to $X$. But how does one recover a Bergman function from its restriction to $X$? We give an answer — a partial answer, for reasons that we will discuss in the course — but let me just formulate it: almost surely, the value $f(z)$ can be recovered by the Patterson–Sullivan construction. I write the corresponding Poincaré series, where $d$ is the Lobachevsky distance and the sum over $x \in X$ has to be understood in the principal-value sense — one sums over annuli. This series converges if $s$ is bigger than one and diverges if $s$ is equal to one, so I take the limit as $s$ goes to one — in fact, the limit along some sequence — and then, through this formula, almost surely one recovers the values of a Bergman function from the values of its restriction to a realization of our determinantal point process. Thank you very much.