Okay, thank you for reminding me, I completely forgot the microphone. So let me repeat. Here I have just summarized the bits from the previous lecture which I will be using. Suppose we know, and we discussed its shape, the joint probability density of the $N$ eigenvalues of a matrix from the real Ginibre ensemble; some of these eigenvalues can be real, and the others come in complex conjugate pairs. We are interested in calculating one of the most important objects in random matrix theory, the marginal densities, frequently also called eigenvalue correlation functions: we just integrate out part of these variables, all but $n$ of them. So the index $n$ means how many arguments remain after integrating out; note it should be a small $n$ here: capital $N$ is the size of the matrix, small $n$ is the number of arguments remaining after integrating out.

I discussed that this ensemble shows nice integrability properties, and the result is in general given as the Pfaffian of a matrix made of $2 \times 2$ blocks, as many as $n^2$ of them, that is, an $n \times n$ array of $2 \times 2$ blocks. Remember that the Pfaffian is defined for skew-symmetric, or antisymmetric, matrices, and the entries of each $2 \times 2$ block satisfy the corresponding symmetries which make the whole matrix antisymmetric. These entries are obtained by integrating a kernel which I did not specify so far; one of the first goals of this lecture will be to give you hints how one can recover this kernel. But suppose we know it; then, for example,

$$ G(z_1, z_2) = \int K_N(z_1, z)\, \mathcal{F}(z, z_2)\, dz, $$

where we integrate the kernel against a function $\mathcal{F}$ which possesses the property of being antisymmetric with respect to the exchange $z_1 \leftrightarrow z_2$. The other entries, $K$ and $W$, are obtained by similar expressions; I will need only $G$ for my present goals, so I do not give those expressions, they can be found in the lecture notes.

So this is what we discussed last time, and I mentioned that if we know this kernel $K_N$, then it encapsulates all the important properties. But how to get it? In the original papers by Borodin and Sinclair and by Forrester and Nagao it was computed explicitly using the method of skew-orthogonal polynomials, a technically involved, ingenious computation. But I would like to give you a bypass, suggested by Sommers and based on an observation due to Edelman, which allows one to get this kernel relatively cheaply.

In fact, the main information that I will need for my modest goals is the mean density, which is just $R_1$, the case $n = 1$, a function of $z_1$ alone. Obviously this is the Pfaffian of a single $2 \times 2$ block, and by antisymmetry the two diagonal entries of that block are zero, so we have just

$$ R_1(z_1) = \operatorname{Pf} \begin{pmatrix} 0 & G_{11} \\ -G_{11} & 0 \end{pmatrix} = G_{11}. $$

So what is $G_{11}$? It is the integral of the still mysterious kernel $K_N(z_1, z)$ against the function $\mathcal{F}(z, z_1)$ written above: we take $\mathcal{F}$, substitute, integrate over $z$, and the two parts of $\mathcal{F}$ generate two parts of the answer, which I will write on the next line to make it readable:

$$ R_1(z_1) = R_1^{c}(z_1) + \delta(y_1)\, R_1^{r}(x_1), $$

where $c$ stands for complex and $r$ stands for real, the latter part depending only on $x_1$.
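As a quick numerical illustration of this decomposition of the density, here is a minimal sketch (assuming NumPy; the matrix size, sample count, and tolerance are arbitrary choices) that samples real Ginibre matrices and counts the exactly real eigenvalues, i.e., the $\delta(y_1) R_1^r$ part, the rest being the complex conjugate pairs described by $R_1^c$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 64, 200
counts = []
for _ in range(samples):
    G = rng.standard_normal((N, N))        # real Ginibre: iid N(0,1) entries
    ev = np.linalg.eigvals(G)
    counts.append(np.sum(np.abs(ev.imag) < 1e-9))  # real eigenvalues only
print("mean number of real eigenvalues:", np.mean(counts))
print("for comparison, sqrt(2N/pi) =", np.sqrt(2 * N / np.pi))
```

The printed count is of order $\sqrt{N}$, consistent with the result we will derive at the end of this lecture.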
By straightforward substitution and integration one finds in particular

$$ R_1^{c}(z_1) = 2\, K_N(z_1, \bar z_1)\; e^{-(z_1^2 + \bar z_1^2)/2}\; \operatorname{erfc}\!\left(\sqrt{2}\,|y_1|\right) \operatorname{sgn}(y_1), $$

where $\operatorname{erfc}$ is the complementary error function and, let me remind you, $x$ and $y$, always with the corresponding indices, are the real and imaginary parts of $z$. One can also get a related expression for $R_1^{r}$, still in terms of the unknown $K_N$, just by performing the corresponding integration.

Now, what is the meaning of this? The meaning is clear: we know that $R_1$ is just the density of eigenvalues around the point $z_1$ in the complex plane, so the two terms have a very clear interpretation. The first is the density of complex eigenvalues around the point $z_1$, and the delta term, which pins the imaginary part to zero, tells us that $R_1^{r}$ is the density of the purely real eigenvalues situated on the real axis.

The trick that Sommers suggested, which I will describe only very briefly, goes as follows: one can provide a completely, or at least relatively, independent method of evaluating this function $R_1$ explicitly, and then, comparing with the expression above, recover the kernel. The idea goes back to the following observation of Edelman. Let us come back to our Ginibre matrix: $G$ is an $N \times N$ matrix with real entries, and suppose we know that it has a particular pair of complex conjugate eigenvalues $x \pm i y$. Then it is a very well known fact of linear algebra that one can represent the matrix $G$ in a form known as the incomplete Schur decomposition. I will write it in blocks, and explain afterwards what $O$ is:

$$ G = O \begin{pmatrix} X & W \\ 0 & G_{N-2} \end{pmatrix} O^{T}, \qquad X = \begin{pmatrix} x & b \\ -c & x \end{pmatrix}. $$

Here $X$ is a $2 \times 2$ block with $x$, exactly the real part of the corresponding complex eigenvalue, on the diagonal, while $b$ and $-c$ are parameters naturally related to $y$; it is easy to see how, one just needs to solve the $2 \times 2$ eigenvalue problem. Then $W$ is some $2 \times (N-2)$ matrix, below $X$ there are just two columns of zeros, and in the remaining corner there is a matrix which I denote $G_{N-2}$. Finally, $O$ is orthogonal, $O O^{T} = \mathbb{1}$, but it is not a general orthogonal matrix: it is made of a pair of orthonormal eigenvectors, in fact, and can be written in terms of so-called Householder reflections, for those who have heard about them. The important point is that matrices of this form fill a particular submanifold of all orthogonal matrices, called a Stiefel manifold.

Why is this decomposition a convenient one? Let me first specify how $b$ and $c$ are related to $y$: one can always ensure that $bc$ is positive and $b$ is larger than $c$, and then $y = \sqrt{bc}$, where we assume $y$ positive; indeed, the eigenvalues of $X$ are $x \pm i\sqrt{bc}$. So such a decomposition always exists. Now, how does one use it? It is basically a change of variables: from the $N^2$ real variables in $G$ we come to the same number of variables, just arranged in a different way.
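One can see this decomposition concretely on the computer. The sketch below (assuming SciPy; it uses the full real Schur form, whereas the incomplete decomposition above simply stops the reduction after the first $2\times2$ block) locates a $2\times2$ diagonal block of the quasi-triangular factor, which LAPACK standardizes to exactly the form $\begin{pmatrix} x & b \\ -c & x \end{pmatrix}$ with $bc > 0$, and checks that the corresponding eigenvalue pair is $x \pm i\sqrt{bc}$:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
N = 8
G = rng.standard_normal((N, N))
T, O = schur(G, output='real')          # G = O T O^T, T quasi-upper-triangular
assert np.allclose(G, O @ T @ O.T)

# a nonzero subdiagonal entry of T marks a 2x2 block [[x, b], [-c, x]];
# a random G of this size has a complex pair with overwhelming probability
k = np.flatnonzero(np.abs(np.diag(T, -1)) > 1e-12)[0]
x = T[k, k]                             # real part of the eigenvalue pair
y = np.sqrt(-T[k, k + 1] * T[k + 1, k])  # y = sqrt(b*c)
print("block eigenvalues:", x, "+/-", y, "i")
print("check against:", np.linalg.eigvals(G))
```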
With this change of variables is associated a change of the integration volume, that is, of the measure $dG$, which was the entrywise flat measure for the Ginibre ensemble. With some due effort Edelman demonstrated that one can recalculate it in the new variables; I will suppress various constant factors and write only the relevant part:

$$ dG \propto (b - c)\, \det\!\left[(G_{N-2} - x\,\mathbb{1})^2 + y^2\,\mathbb{1}\right] db\, dc\, dx\; dG_{N-2}\; dW\; d\mu(O), $$

where $dG_{N-2}$ and $dW$ are entrywise measures, products over all independent entries, and $d\mu(O)$ is the measure on the Stiefel manifold spanned by such matrices $O$.

Now, if we are interested in the probability density of $x$ and $y$, we should just make another change of variables, from $b, c$ to $y$ and one extra variable $\delta$, and then integrate out everything apart from $x$ and $y$. Of course we should remember that there is the Ginibre weight $\exp(-\tfrac12 \operatorname{Tr} G G^{T})$, and we should express this weight in terms of the new variables; this is a very simple exercise. Since everything is Gaussian, it is possible to integrate out everything apart from $x$ and $y$ explicitly, with very modest effort (this could also be a useful exercise), and one ends up showing that what we called $R_1^{c}$, now as a function of $x$ and $y$, the real and imaginary parts of $z$, is proportional, with a known proportionality constant, to

$$ R_1^{c}(x, y) \propto |y|\, e^{y^2 - x^2}\, \operatorname{erfc}\!\left(\sqrt{2}\,|y|\right) \left\langle \det\!\left[(G_{N-2} - x\,\mathbb{1})^2 + y^2\,\mathbb{1}\right] \right\rangle_{G_{N-2}}, $$

where the angular brackets stand for averaging over the measure of $G_{N-2}$, which is again the Ginibre measure, but for matrices of size $(N-2) \times (N-2)$ rather than $N \times N$.

You may ask what we have gained. We have gained a lot, because now we can compare this expression with the earlier one, and provided we have an efficient way of evaluating the determinant average written here, over Ginibre matrices of arbitrary size, we can read off what this $K_N$ is; you may already see that it is more or less equal to this expectation.

Now the last bit. All of this was known from Edelman's paper; Sommers added the very nice observation that this determinant average can be evaluated very efficiently. One can write it further as

$$ \left\langle \det(G - z)\, \det(G - \bar z) \right\rangle, $$

a product of two determinants, and there is a very efficient way of averaging products of determinants over matrices with Gaussian-distributed entries. I hope that Michel Poplowski at least mentioned the method of integration over anticommuting, or Grassmann, variables: one can represent each of the determinants as a Gaussian integral over vectors with anticommuting components, and then in half a page one really gets this average.
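The closed form this produces for the average of the two characteristic polynomials is $\langle \det(G_n - z)\det(G_n - \bar z)\rangle = n!\sum_{k=0}^{n} |z|^{2k}/k!$, and note that $n!\sum_{k \le n} a^k/k! = e^{a}\,\Gamma(n+1, a)$, which for $n = N-2$ is exactly the combination appearing in the kernel below. Here is a quick Monte Carlo sanity check of that identity (a sketch assuming NumPy; the parameters are arbitrary, and $n$ plays the role of $N-2$):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
n, samples = 4, 50000                  # n plays the role of N-2
x, y = 0.7, 0.4                        # test point z = x + i*y
a = x**2 + y**2                        # z * conj(z)
I = np.eye(n)

# Monte Carlo: det[(G - x)^2 + y^2] = det(G - z) det(G - conj(z)) for real G
acc = 0.0
for _ in range(samples):
    G = rng.standard_normal((n, n))
    acc += np.linalg.det((G - x * I) @ (G - x * I) + y**2 * I)
acc /= samples

closed = factorial(n) * sum(a**k / factorial(k) for k in range(n + 1))
print(acc, closed)                     # agree up to Monte Carlo error (a percent or so)
```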
Calculating this average is a relatively simple exercise, and in this way one really obtains the kernel. So let me now write the explicit expression for it:

$$ K_N(z_1, z_2) = \frac{z_1 - z_2}{2\sqrt{2\pi}}\; e^{z_1 z_2}\; \frac{\Gamma(N-1,\, z_1 z_2)}{(N-2)!}. $$

The main, or at least the most important, part of it is the function I will define in a moment: the incomplete gamma function divided by a factorial. There are several definitions; one definition is

$$ \frac{\Gamma(n, a)}{(n-1)!} = e^{-a} \sum_{k=0}^{n-1} \frac{a^k}{k!}, $$

that is, the exponential times the Taylor series for $e^{a}$, truncated (better to say) after the first $n$ terms. It also has a convenient integral representation,

$$ \Gamma(n, a) = \int_a^{\infty} t^{\,n-1} e^{-t}\, dt, $$

which can be used to extract various useful properties, especially its asymptotic behavior. So this is the explicit expression; after all this work, it is not that complicated, a manageable expression. And just by comparing the two expressions, you see that up to some factors the kernel is just this expectation, because these are two independent calculations.

Having this at our disposal, one can finally write explicit formulas for $R_1$, the densities of complex and of real eigenvalues; I now get rid of the auxiliary part above, which was only showing how one gets access to the kernel. The complex one is easier:

$$ R_1^{c}(x, y) = \frac{2|y|}{\sqrt{2\pi}}\; e^{2y^2}\, \operatorname{erfc}\!\left(\sqrt{2}\,|y|\right)\, \frac{\Gamma(N-1,\, x^2 + y^2)}{(N-2)!}. $$

This is the density of complex eigenvalues around the point of the complex plane with coordinates $x$ and $y$, that is, around $z = x + iy$. With some effort, knowing the kernel and using the integration that I showed, one also recovers the density of real eigenvalues, depending on $x$. The first term is again given in terms of the incomplete gamma function,

$$ R_1^{r}(x) = \frac{\Gamma(N-1, x^2)}{\sqrt{2\pi}\,(N-2)!} + \text{(second term)}, $$

and there is another term, proportional to $x^{\,N-1} e^{-x^2/2} \int_0^{x} e^{-u^2/2}\, u^{\,N-2}\, du$, which most of the time, at least in the main part of the spectrum, is immaterial, but I needed to write it down explicitly. So we also have an explicit formula for the density of real eigenvalues. These formulas are valid, let me remind you, for any even size $N$ of the matrix; honestly, I do not remember whether there are essential modifications for odd $N$. I think for the density of real eigenvalues there is some slight modification for odd $N$, but effectively it is not very important.

Now we have everything we need at finite matrix size, so let us analyze the most interesting limit, when $N$ is big, much larger than one.
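These finite-$N$ formulas are easy to evaluate on the computer. A small sketch (assuming SciPy, whose gammaincc(a, t) is the regularized upper incomplete gamma $\Gamma(a,t)/\Gamma(a)$, i.e., exactly $\Gamma(N-1, t)/(N-2)!$ for $a = N-1$) that checks the truncated-exponential definition and evaluates $R_1^c$ at a test point:

```python
import numpy as np
from math import factorial
from scipy.special import erfc, gammaincc

def r1_complex(x, y, N):
    """Finite-N density of complex eigenvalues of the real Ginibre ensemble,
    as written above."""
    ay = abs(y)
    return (2 * ay / np.sqrt(2 * np.pi)) * np.exp(2 * ay**2) \
        * erfc(np.sqrt(2) * ay) * gammaincc(N - 1, x**2 + y**2)

# truncated-exponential definition: Gamma(n, a)/(n-1)! = e^{-a} sum_{k<n} a^k/k!
n, a = 7, 3.0
print(gammaincc(n, a), np.exp(-a) * sum(a**k / factorial(k) for k in range(n)))

# well inside the circle |z| < sqrt(N), a few units off the real axis: ~ 1/pi
print(r1_complex(4.0, 3.0, N=100), 1 / np.pi)
```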
The first fact that we will immediately use is a well-known property of this incomplete gamma function: when its argument (the second slot; the first one is a parameter) is scaled with $n$, it has a well-defined limit as $n \to \infty$:

$$ \lim_{n \to \infty} \frac{\Gamma(n,\, n a)}{(n-1)!} = \begin{cases} 1, & a < 1, \\ 0, & a > 1. \end{cases} $$

It is a useful exercise to see this; for me it is most easily seen by rescaling $t$ with $n$ in the integral representation and applying the Laplace method of evaluating the integral: for $a$ smaller or larger than one, the corresponding stationary point does or does not belong to the domain of integration, and this is what controls the behavior, one or zero. In fact, later on I will need more precise asymptotics: not the limit $n \to \infty$, but what replaces the zero when $n$ is large but finite, that is, the explicit order of this term. It is obtained by exactly the same procedure, just not taking $n$ to infinity but keeping it finite and large; I will recall it when I need it.

So what else do we need? This property shows that it is meaningful to consider values of $x$ and $y$, or generically of $z$, rescaled with $\sqrt{N}$, in order to have the argument of the function be of order $N$. If we do such a rescaling, we also need the asymptotic behavior of the complementary error function. I won't repeat its definition, it was given last time, it is just a certain integral with a Gaussian integrand, but using it, it is straightforward to get that asymptotically, when $y$ is much larger than one,

$$ \operatorname{erfc}\!\left(\sqrt{2}\,|y|\right) \simeq \frac{1}{\sqrt{2\pi}}\, \frac{1}{|y|}\, e^{-2y^2}. $$

Now let us start with the density of complex eigenvalues. If we substitute these asymptotics into the expression above, basically all factors cancel apart from a constant, and if you substitute and calculate this constant you will get

$$ R_1^{c}(z) \to \begin{cases} \dfrac{1}{\pi}, & |z| < \sqrt{N}, \\[4pt] 0, & \text{otherwise}, \end{cases} $$

for $N$ large. So we recovered what we expected: the Ginibre circle of radius $\sqrt{N}$, if you remember my first lecture. The complex eigenvalues of a real Ginibre ensemble of big size fill, more or less uniformly, the interior of this circle (my drawing is more of an ellipse, but it is meant to be a circle of radius $\sqrt{N}$), with density $1/\pi$, for large enough $N$.

And what about the real eigenvalues? A simple study shows that if one scales $x$ with $\sqrt{N}$, as one expects to do, then the second term is subleading, and using the asymptotic relation for the incomplete gamma function we get

$$ R_1^{r}(x) \to \begin{cases} \dfrac{1}{\sqrt{2\pi}}, & |x| < \sqrt{N}, \\[4pt] 0, & \text{otherwise}. \end{cases} $$

Then we can immediately conclude how many real eigenvalues we typically have: they sit in this interval with the constant density $1/\sqrt{2\pi}$, so there are of order $\sqrt{N}$ of them, on average $2\sqrt{N}/\sqrt{2\pi} = \sqrt{2N/\pi}$. So we see that the majority of the eigenvalues of a real Ginibre ensemble, of order $N$ of them, are complex.
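Both limiting plateaus are easy to confirm numerically; a short check with SciPy's regularized incomplete gamma (the parameters are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaincc

# step-function limit: gammaincc(n, n*a) = Gamma(n, n*a)/(n-1)! -> 1 (a<1), 0 (a>1)
for n in (10, 100, 1000, 10000):
    print(n, gammaincc(n, n * 0.8), gammaincc(n, n * 1.2))

# bulk density of real eigenvalues: first term of R_1^r at x = 0.5*sqrt(N)
N = 400
x = 0.5 * np.sqrt(N)
print(gammaincc(N - 1, x**2) / np.sqrt(2 * np.pi), 1 / np.sqrt(2 * np.pi))
```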
But still there remain of order $\sqrt{N}$ of them sitting exactly on the real axis, with uniform density, precisely on this diameter inside the circle. And when I said that the complex eigenvalues uniformly fill this circle, I cheated a little bit, or rather it depends on how you look at the problem: if you take a magnifying glass and look at a small vicinity, of order one, ...