So we start the last talk of the morning session, and we will talk about edge universality for the non-Hermitian case. Thank you very much to the organizers for the invitation. Can you hear me? The microphone works. OK, very good. I am going to report on joint work with my PhD student, Giorgio Cipolloni, and my postdoc, Dominik Schröder. But before I come to that, let me say one word of advertisement about my place. I am currently at IST Austria, which stands for Institute of Science and Technology Austria, a young research institute near Vienna. It was founded a few years ago; it has mathematics, physics, biology, computer science and many other things. Most importantly, it is a growing institute, so we are permanently looking for people to come to us. Consider this an advertisement: we have positions for postdocs, and we also have a graduate school, so if anybody here is interested in doing a PhD, and I see some young people here, then this is a good place to be. You can check out the web page.

OK, so that was the advertisement. Now, the talk has two parts. In the first part I will talk about non-Hermitian edge universality, which is the title, and in the second, shorter part I will talk about an important ingredient of the proof, which is interesting in itself and has a meaning of its own.

Let me start with a few pictures. We are looking at random matrices. In this talk the random matrix X will always be a large n by n matrix, n is a big number, n goes to infinity, and it is not a Hermitian matrix. That is the novelty here, also in the title: most random matrix work so far, also in this conference, has concerned Hermitian random matrices. Traditionally one does that first, but this talk is about the non-Hermitian situation. We consider the simplest possible non-Hermitian matrix of this type, the analogue of the standard Wigner matrices in the Hermitian case: the matrix elements have expectation zero, they are i.i.d., independent and identically distributed, not necessarily Gaussian, and I normalize the variance to be 1/n, which is the standard normalization. Under this normalization the spectrum typically remains of order one as n goes to infinity. Of course, in the non-Hermitian situation the spectrum is not on the real line anymore. This is how it looks: here n is equal to 50. Under this normalization the eigenvalues remain essentially confined to the unit disk in the complex plane. The two pictures indicate the real and the complex situation; similarly to the Wigner setup, where you have GUE and GOE, here you can talk about complex and real ensembles. This picture is the real case, this is the complex case. From far away there is no difference between the real and the complex case, but if you look a bit more carefully then you see two things in the real spectrum. First of all, it is symmetric with respect to the real axis; this is natural. And there are also eigenvalues lying exactly on the real line. In the complex situation there are typically no eigenvalues on the real line: the real line is a codimension-one submanifold of the plane, so there is no reason for any eigenvalue to lie on it. But the real case is different: a real matrix typically has real eigenvalues, at least a few of them.
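If you want to reproduce these pictures yourself, here is a minimal numerical sketch; this is just an illustration in Python, assuming numpy, and the choice of n and of the tolerance for "real" eigenvalues is mine, not from the slides.

```python
import numpy as np

n = 500

# Real i.i.d. matrix: entries with mean zero and variance 1/n.
X_real = np.random.randn(n, n) / np.sqrt(n)
# Complex i.i.d. matrix: real and imaginary parts each with variance 1/(2n).
X_cplx = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2 * n)

ev_real = np.linalg.eigvals(X_real)
ev_cplx = np.linalg.eigvals(X_cplx)

# The spectrum stays essentially inside the unit disk.
print("largest |eigenvalue|, real case:   ", np.abs(ev_real).max())
print("largest |eigenvalue|, complex case:", np.abs(ev_cplx).max())

# The real matrix has roughly sqrt(n) eigenvalues sitting exactly on the real axis.
n_real = np.sum(np.abs(ev_real.imag) < 1e-10)
print("eigenvalues on the real axis:", n_real, "  sqrt(n) =", round(float(np.sqrt(n)), 1))
```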
Actually, the fact is that typically about square root of n eigenvalues lie on the real line. Now you can increase n a little; this is n equal to 500. You still see the same features: the spectrum is symmetric, and you see eigenvalues on the real axis. In the complex case you do not really see this; in the complex case there is no such symmetry. Actually there is rather a rotational symmetry, in distribution: if you multiply every matrix element of X by the same complex number of unit length, then the distribution of the whole ensemble stays the same. And finally, here is the picture for n equal to 2,000. So the story is similar to the Wigner case: as n increases you see a higher and higher density of eigenvalues, and you also see that the spectrum is converging to something deterministic, in this case the unit disk. And this picture is already convincing that the density is essentially uniform. So unlike the Wigner situation, where you have the semicircle law, which means that as you approach the edge the density decreases like a square root, here it does not: there is a very sharp transition as you cross the boundary of the disk. Inside the disk the density is constant (one, with the right normalization), outside it is zero; a very sharp transition. You also see, or at least you can guess immediately from the uniform distribution, that the typical spacing, the distance between two neighboring eigenvalues, is what you expect: it is 1 over square root of n, since you have n points in a two-dimensional region of order one, so the typical distance is 1 over square root of n.

OK, so of course all the things I have said have already been proven. This is the famous circular law, which establishes the convergence of the empirical spectral density to the uniform measure on the disk. It has also been proven that there is an accumulation of about square root of n eigenvalues on the real axis.
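Written out as a formula, the global circular law says, schematically (for a fixed, nice test function f),

\[
\frac{1}{n}\sum_{i=1}^{n} f(\sigma_i) \;\longrightarrow\; \frac{1}{\pi}\int_{|z|\le 1} f(z)\,\mathrm{d}^2 z
\qquad (n \to \infty),
\]

where the sigma_i are the eigenvalues of X; the local versions below are refinements of exactly this statement.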
Now, a bit more formally, you can formulate a local version of the circular law. The circular law naively says that you take a fixed subset of the disk, you count the eigenvalues in this subset, and you compare with the limiting value; that is a typical law-of-large-numbers theorem. You can ask the same question in a local sense, and that is why it is called a local circular law. Here I do not have a picture, I have a formula. The formula is the following: you take a test function f, just a nice compactly supported function, and you rescale it to live on the scale n to the minus a. Here a is an exponent which will run between 0 and 1/2, and n to the minus a should be thought of as the scale on which I am observing the density. Accordingly I have to rescale my observable by a compensating factor, n to the 2a in two dimensions, with the corresponding normalization outside, and I do all of this around a fixed point z_0. So I take a point z_0, I take a little neighborhood of z_0 of size n to the minus a, and I average this localized observable over the eigenvalues of the matrix in that domain. Everything is normalized so that the typical size of this quantity is of order one. Now this is a random quantity, and I want to understand what it is. The claim is a law of large numbers: this random quantity converges, with very high probability, to a deterministic number, the integral of this test function on the corresponding scale, and this holds with a good error bound. The error bound of course contains this a; forget about the epsilon, that is just a small correction, the important thing is the a. The precision of the approximation is n to the 2a minus 1. If a is small, for example a equals 0, the crudest, most robust resolution, then the speed of convergence in this law of large numbers is 1 over n, very good. As a increases, the precision of the law of large numbers gets worse and worse, like n to the 2a minus 1, and eventually, when a becomes 1/2, this becomes an order-one object, so you are estimating an order-one quantity with order-one precision. But this is natural, because once you are down to the scale n to the minus 1/2, so a equals 1/2, the law of large numbers stops working: you are looking through such a strong microscope that you see a few individual eigenvalues, and once you see individual eigenvalues there is no deterministic limit anymore; you should see the distribution, the fluctuation itself. So this is the way to interpret it, as a law-of-large-numbers type result.

This statement is by now proven in full generality. Here is a bit of the history behind it. The global law, which corresponds to a equals 0, is due to Girko, whose name will come up later, and then Tao and Vu proved it under the optimal moment condition, only the second moment has to exist, but still in the global regime. In the local regime there were papers first for the bulk regime and then, in a separate paper, for the edge regime. Bulk and edge here refer to the location of the reference point z_0: the bulk is when z_0 is strictly inside the disk, the edge is when it is exactly at the boundary, and outside is not interesting. And recently we also generalized all of this to a non-constant variance profile, which means that we drop the identical distribution and allow the matrix elements to have a nontrivial variance profile: the variance changes depending on where you are in the matrix.
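Schematically, and up to my normalization conventions and an n to the epsilon correction, the local circular law I am referring to reads

\[
\frac{1}{n}\sum_{j=1}^{n} f_{z_0}(\sigma_j)
= \frac{1}{\pi}\int_{|z|\le 1} f_{z_0}(z)\,\mathrm{d}^2 z
+ \mathcal{O}\!\left(n^{\,2a-1+\epsilon}\right),
\qquad
f_{z_0}(z) := n^{2a}\, f\!\left(n^{a}(z - z_0)\right),
\]

so at a equals 0 the error is essentially 1/n, and at a equals 1/2 it degenerates to order one, exactly as I just described.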
Now, as I already said, for a equal to 1/2 there are only finitely many eigenvalues contributing to this sum, so obviously we are beyond the law-of-large-numbers regime, and instead you expect convergence to some universal distribution. The right analogy is a CLT, a central-limit-theorem type result, and the answer should not be a deterministic number as a limit but rather a distribution. Now, of course, the object we look at here is a point process, we have zoomed in on the points. So what could the limit of a point process be? The first guess is a Poisson point process; that would correspond to the most naive and simplest situation. But this is not a Poisson process: it has correlations, nearby eigenvalues repel each other, exactly as in the Wigner case. So it is not Poisson, but then what else? Here is the corresponding conjecture, the analogue of the Wigner-Dyson conjecture, by now a theorem, for the non-Hermitian situation.

Let me first define the correlation functions; it is easier to express everything in terms of correlation functions, and they have already appeared in this conference. Here is one way of defining them. For any fixed k, the k-point correlation function is a function of k variables, here denoted p_k^(n), and it can be defined in such a way that if you take any nice compactly supported test function f of k variables, and you compute the expectation of this test function over all possible k-tuples of distinct eigenvalues, in all possible combinations, properly normalized, then after taking the expectation this sum is given as a linear functional of f: it is the integral of f against some function, and that function is the k-point correlation function. This is one way to define it; in any case, the k-point correlation function expresses all correlations of order k in the distribution of these points.

So here is the conjecture. It is formulated both in the real and in the complex situation; the final answer differs a little bit depending on which symmetry class you are in. You can even be more general and fix different base points, z_1 through z_k, these are the centers of these red dots, and then you rescale the correlation function accordingly: each variable is shifted to the corresponding z, so z here is a whole vector, the correlation function is a function of k variables shifted to the vector z, and then you rescale the remaining variable w by square root of n. This is the right scaling, and it expresses the fact that you zoom in around the points z_1, z_2 and so on, on the scale one over square root of n. The claim is that these rescaled correlation functions converge, in a weak sense, tested against test functions, to a universal k-point correlation function, which depends on the symmetry class and also on these base points. Yes (to a question), because the scaling is determined by the local density, and the density is constant; yes, it will come immediately. This slide just states that there is a universal function, but on the next slide this universal function will be given explicitly; it is of course the one you get from the Ginibre ensemble. Ginibre in this language plays the same role as GUE and GOE, it is the corresponding Gaussian case. So here is this function; that is exactly what you asked, and that is the statement of the theorem, or rather the conjecture, it is still a conjecture in general. When you ask what this function could be, of course it is what you get from the Gaussian calculation, from the Ginibre ensemble. Even for Ginibre this is not a trivial calculation; it is much harder than in the corresponding Hermitian, Wigner case.
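To fix the notation, the definition and the scaling I have in mind are, schematically and with my normalization conventions,

\[
\mathbb{E}\!\!\sum_{i_1\neq\cdots\neq i_k} f(\sigma_{i_1},\dots,\sigma_{i_k})
= \int_{\mathbb{C}^k} f(w_1,\dots,w_k)\, p_k^{(n)}(w_1,\dots,w_k)\,\mathrm{d}^2 w_1\cdots \mathrm{d}^2 w_k ,
\]

and the conjectured universality is the weak convergence of the rescaled correlation functions,

\[
\frac{1}{n^{k}}\; p_k^{(n)}\!\Big(z_1+\tfrac{w_1}{\sqrt{n}},\,\dots,\,z_k+\tfrac{w_k}{\sqrt{n}}\Big)
\;\longrightarrow\; p_k^{(\infty)}(w_1,\dots,w_k),
\]

where the limit depends only on the symmetry class and on whether the base points z_j sit in the bulk or at the edge.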
Anyway, here is that function. In the complex case it is determinantal, it is again given by a kernel function, and here is the kernel; it is a little more complicated, but the important things are the following. First of all, the kernel is essentially zero unless the base points coincide. I formulated the theorem in such a way that I allowed very different base points, but once the base points are far away from each other the correlation function factorizes in a trivial way, so the really interesting regime is when the base points are on top of each other, and that is what is expressed by the zero here. Second, the kernel itself feels whether you are at the edge or not. In the bulk the kernel is something very simple, basically a Gaussian kernel, e to the minus |w_1 minus w_2| squared (there is a typo here, it should be w_2, and the squares are missing; I just noticed the squares are missing everywhere, it is a Gaussian). At the edge it is a bit more complicated, an error function comes in. This is the formula for the complex case. The formula for the real case is much more complicated; an explicit formula does exist, it was found quite recently, and even stating it takes a whole page in the journal, but it is an explicit formula for the real Ginibre ensemble.

So now, finally, here is our theorem. Our theorem proves this whole thing at the edge. The claim is that non-Hermitian universality holds in the edge regime, which means that all these base points are chosen near the edge; here we allow a distance of order 1 over square root of n, but that is the natural scale anyway, so you can think of the base points as fixed on the boundary of the disk. The matrix is, of course, as before: an i.i.d. random matrix with the usual normalization. We can do both cases, the complex and the real one. And the statement is what you expect: the appropriately rescaled k-point correlation functions converge to where they should converge, to the Ginibre functions from before. There are some technical assumptions: high moments, in fact all moments of the matrix elements have to exist. Yes, that is right; yes, this is what we prove now. And for Ginibre you can do it pointwise, that is what you mean, OK.

So let me emphasize: this is at the edge. Bulk universality is still an open problem, and it is probably much harder. There is only one previous result about the whole non-Hermitian universality story, and it is due to Tao and Vu. They did both the edge and the bulk, but under moment matching conditions. Tao and Vu have this general four moment theorem, which basically tells you that if you take a random matrix X, not Gaussian, such that sufficiently many moments match those of the Gaussian, then the result carries over. So in a certain sense it is a perturbative type result, it is far from being a full universality result, but under these moment matching conditions both regimes have been proven; I think this falls short of full universality.
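Before I come to the overview table, let me write down, up to normalization conventions, the complex bulk kernel I referred to a moment ago:

\[
p_k^{(\infty)}(w_1,\dots,w_k) = \det\Big( K(w_i,w_j) \Big)_{i,j=1}^{k},
\qquad
K(w_1,w_2) = \frac{1}{\pi}\,
\exp\!\Big( w_1 \overline{w_2} - \tfrac{1}{2}|w_1|^2 - \tfrac{1}{2}|w_2|^2 \Big),
\]

so that, for example, the two-point function is pi^{-2} (1 - e^{-|w_1 - w_2|^2}), which is the Gaussian decay I mentioned; at the edge an additional error-function factor appears in the kernel.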
Now here is a table which gives you a bit of an overview of how various universality results are obtained, and in particular I would like to place our result in it, so let me explain why we could do this and what the basic idea is behind proving edge universality in the non-Hermitian case. Let me explain the table and spend a little time on it, because some of you have probably followed the various universality developments, and you may find new and newer papers coming up on the arXiv, each with a title "universality of something", and you may get bored of it and wonder what the differences between them are. This table is supposed to make a distinction between them. But let me say immediately, before anybody gets offended: this table does not contain all the results, it actually contains only the first relevant paper in each box, and there have been many, many other precursors of these results and later improvements due to many people. So do not get upset about the table.

Let me come back to the table. First of all, the first column shows the various models. The first three rows, the Wigner case, correspond to the Hermitian situation; I am not going to talk about that, but it has to be put in the table for comparison, and actually the Hermitian case is the regime which is very well understood, you see almost everything is understood there. The lower rows are the non-Hermitian case, and there almost nothing is understood, except this one thing. The other point is that there is a difference between the bulk and the edge, as we know from the Wigner situation, and actually there is a third universality class in the Wigner case, called the cusp; I will come to that in a moment. So there are three different universality regimes in the Wigner case, and in the non-Hermitian case there are two, the bulk and the edge. That is the first column.

Now, the various columns are methods. Every column corresponds to a fundamental method by which people have proved universality, and by which people will also try to prove universality in the future, unless somebody comes up with a completely new method, which would then appear as one more column. The first and simplest method is the moment method; we have seen it also in Alice's talk. You start computing high moments of the random matrix, traces of high powers of the random matrix. This is a very powerful method for macroscopic scales, for the density of states and so on, and to some extent up to mesoscopic scales as well, but typically the moment method stops at some point, it is just not strong enough to go to the very finest scale. The only regime where it works all the way is the edge regime in the Wigner case, and there is a good reason for that: the moment method can work down to the scale n to the minus two-thirds, and the eigenvalue scaling at the Wigner edge happens to be exactly n to the minus two-thirds. So it is good luck that at the Wigner edge the moment method works, but one does not expect any other universality question to be solved by moment methods. The second column is what I already mentioned as the Tao-Vu method. The Tao-Vu method is a perturbative method; everything works here if you assume sufficiently many matching moments. The point is that you want to prove that a matrix with an arbitrary distribution is sufficiently close, in terms of spectral statistics, to the corresponding Gaussian case. If you assume sufficiently many moments matching, then you are in good shape; it is still hard work and a nontrivial thing, but that is the philosophy behind it, and in each of these papers they assume exactly as many matching moments as needed to remain in the perturbative regime. So that is this column.
Then come the two other columns, and there is a difference between the two. Both columns use some kind of dynamical approach. A dynamical approach means that you start with your original matrix, you embed this matrix into a stochastic flow, typically some kind of matrix Brownian motion or Ornstein-Uhlenbeck matrix flow, whose initial condition is your matrix. You run the flow, and after a long, long time the end point of the flow is the Gaussian matrix: in many of these situations the Gaussian matrix is the equilibrium measure of the stochastic flow. So if you start the flow from your initial object and run it until it is Gaussian, and if you can control the whole process, then the result known for the Gaussian case can be pulled back to what you wanted, the original non-Gaussian situation. All these last columns use this basic dynamical idea; the big difference between the two columns is whether you do it for a short time or for a long time.

Let me first say something about the last column, which is a bit easier. In certain situations you can run this idea up to basically infinite time: you start with your matrix, as I said, you run the stochastic flow up to infinite time, until you reach the full Gaussian situation, and if you can control the process over this long time then you are in good shape. But this is a hard thing to do, and you have to control each individual eigenvalue. In some cases this is doable, and typically it is doable in the edge regime; it has been done at the Wigner edge by Lee and Schnelli. And this is also the basic reason, in the paper I am going to talk about, why we could do the edge regime but not the bulk regime: in the edge regime there is a density gain, the density is smaller at the edge, and that smaller density means that I can control this long-time stochastic evolution for a longer time. Now this sounds like a contradiction, because you just asked me what the density at the edge is, and for the non-Hermitian matrix the density at the edge is the same as in the bulk. But, as I will explain later, I will do this not for the original matrix but for its Hermitized version, and for the Hermitized version there is a difference between the density at the edge and in the bulk. So keep in mind that the reason why you can run the Dyson Brownian motion for a long time is that you are in an edge regime.

And finally, this is the hardest, the most sophisticated technology: when you can run this Gaussian flow only for a short time, because you simply cannot control it up to infinite time in an easy way. Short time means that you run the Dyson Brownian motion for a short time, so you add a small Gaussian component; this you can still afford by a perturbative argument, and then you have to work hard to prove universality for a matrix which has a small Gaussian component. There are essentially two ways to do that, depending on whether you are in the complex situation or in general. In the complex situation there is a magic identity, the Brézin-Hikami identity, or the Harish-Chandra-Itzykson-Zuber formula, which allows you to do an explicit integral over the unitary group, and that allows you to do lots and lots of things; but this works only in the complex case.
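In a formula, and only schematically, the dynamical idea common to both of these columns is to run the matrix-valued Ornstein-Uhlenbeck flow

\[
\mathrm{d}X_t = -\tfrac{1}{2}\,X_t\,\mathrm{d}t + \frac{\mathrm{d}B_t}{\sqrt{n}},
\qquad X_0 = X,
\]

whose solution can be written in distribution as

\[
X_t \;\overset{d}{=}\; e^{-t/2}\,X \;+\; \sqrt{1-e^{-t}}\;G,
\]

with G an independent Ginibre matrix; the entrywise variance stays 1/n for all times, for short times this is the original matrix plus a small Gaussian component of size roughly square root of t, and as t goes to infinity it converges to the Ginibre ensemble. This is exactly the kind of flow that will reappear in the proof later.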
There have been papers which use that identity. And if you are not in the complex regime, or you want a proof which works equally well in the real and in the complex case, then you really have to do Dyson Brownian motion on the level of eigenvalues, with the whole story behind that. But I am not going to talk about that, because our result avoids it. I just wanted to bring up this table to show you how it comes that we can prove universality despite not really doing Dyson Brownian motion for the eigenvalues: in the non-Hermitian case doing the Dyson Brownian motion is very, very hard, nobody knows how to do it, the corresponding analogue of the Dyson Brownian motion for the eigenvalues of a non-Hermitian matrix is unknown. It is a big, big open question. OK, so let me put some more names here, again with the disclaimer that this table contains only techniques for universality of Wigner, Hermitian and non-Hermitian matrices. Of course there have been many, many more results: many universality results about beta ensembles and sparse matrices, and also about eigenvectors. These could all have been put into a separate part of the table, but then it would not have fit.

OK, so now let me come to the cusp universality for Hermitian Wigner-type matrices, these two boxes here, because the cusp is a new phenomenon and it is relevant for our final result, so let me devote a page to it. Here I have to leave the regime of the Wigner matrix, simply because the Wigner matrix does not have a cusp; the cusp is a particular behavior of the density, and the Wigner matrix has only the semicircle, there is no cusp there. But if I take a Wigner matrix plus a diagonal matrix, any deterministic diagonal matrix, then the density of states becomes something other than the semicircle. In particular it can very well happen that the density of states looks like this, I am just drawing something for you: a typical density of states can have several supporting intervals, it has the usual square root singularities at their endpoints, but sometimes, when two supporting intervals touch, there is a cusp singularity, and this is a new regime. The cusp singularity corresponds to a behavior like x to the one-third, unlike the square root at the edge. This can be achieved if you tune the parameters; it is not the typical situation, a typical matrix does not have it, but if you tune the parameters it does. How do you compute this picture, how do you compute the density in this situation? You have to solve the Dyson equation, but it is not a scalar Dyson equation, it is a vector Dyson equation, and it looks like this: you have to solve this nonlinear equation for the unknown vector m. Here m is in boldface, but it really stands for n different scalar objects; you have to find the solution which lies in the upper half plane, and then you take the average of the imaginary parts of these coordinates, and that becomes the Stieltjes transform of the density of states, so by inverting the Stieltjes transform you can find the density of states.
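For completeness, the vector Dyson equation I have in mind is, schematically (with a_i the diagonal deformation entries and S_ij the variances of the matrix elements),

\[
-\frac{1}{m_i(w)} \;=\; w + a_i + \sum_{j=1}^{n} S_{ij}\, m_j(w),
\qquad i=1,\dots,n,\qquad \operatorname{Im} m_i(w) > 0 \ \text{for}\ \operatorname{Im} w>0,
\]

and the density of states is recovered from the average of the imaginary parts,

\[
\rho(E) \;=\; \lim_{\eta \downarrow 0}\, \frac{1}{n\pi} \sum_{i=1}^{n} \operatorname{Im} m_i(E+\mathrm{i}\eta).
\]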
As I said, and as the picture indicates, there are only two types of singularities, the square root edge and the cubic root cusp; that there is no other singularity for such an equation is a nontrivial theorem. The cusp regime therefore constitutes the third, and also the last, universality regime: the first two are the bulk, with the sine kernel, and the edge, and this is the third one, a completely different universality regime. The scaling is different, it is n to the minus three-quarters, and the corresponding kernel is also very different, it is called the Pearcey kernel. Unlike Wigner and Dyson, Pearcey, the third name here, was by the way not a mathematician but an electrical engineer. In the complex case it is known that the process is determinantal and there is an explicit formula, the Pearcey kernel; in the real case no explicit formula is known, it must exist, but nobody has found it yet. Now, universality has been proven in the cusp regime in both symmetry classes; these are recent results, and the real case was somewhat different.

Why did I mention all this? Because the cusp local law, proving the corresponding local law in this situation, is actually a precursor of the non-Hermitian edge universality, if I use Girko's Hermitization. So let me explain Girko's Hermitization; this formula is the bread and butter, the starting point of essentially any work on non-Hermitian random matrix theory, and Girko figured it out. It is an identity which expresses, for any test function, the linear statistics of the non-Hermitian eigenvalues, the sigmas, in terms of something which involves only Hermitian information. Namely, you double the space: you create this matrix H^z, which is a 2n by 2n matrix, you put X and X star in the two off-diagonal blocks and you subtract z; z is a parameter here, with respect to which you will have to integrate. So out of one single non-Hermitian matrix you create a one-parameter, one complex parameter, family of Hermitian matrices H^z. But H^z is Hermitian, so now we are back in the Hermitian world, which we like. The formula then says that you take the trace of the resolvent of this matrix and you integrate the resolvent along the imaginary axis, at spectral parameter i times eta, with eta from 0 to infinity, and on the other hand you integrate over the z parameter against the Laplacian of the test function. It is an identity, and the merit of this identity is that you can forget about the non-Hermitian situation, which is anyway very hard to deal with; you have to understand the right-hand side.

Now, in the i.i.d. case the eigenvalue density of this matrix H^z can be computed. It is a little more complicated than the semicircle, but not much. Behind the semicircle law there is a quadratic equation, minus 1 over m equals w plus m, the basic equation for the Stieltjes transform of the semicircle; here the corresponding equation is a cubic equation for the density of states of the matrix H^z. Here w is the spectral parameter, eventually we plug in w equal to i times eta, and z is the extra parameter: you see, if z is 0 then the extra term is not there and you get exactly the usual semicircle, but if z is there then you have an extra term, and it leads to a cubic equation. The local law also holds. Local law means that the resolvent becomes deterministic as n goes to infinity, and it can be well approximated by a deterministic matrix which looks like this. The resolvent is of course a big matrix, 2n by 2n; it is approximated by a 2n by 2n matrix with a block structure, consisting basically of four diagonal blocks: on the diagonal blocks there is this m, the solution of this equation, and in the off-diagonal blocks there is a u, given by an easy formula in terms of m, so it is explicitly known.
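Let me write Girko's formula, at least schematically; the eta integral actually needs a regularization at large eta, which I suppress here, and the self-consistent equation is written in one common form:

\[
\frac{1}{n}\sum_{i=1}^{n} f(\sigma_i)
= \frac{1}{4\pi n}\int_{\mathbb{C}} \Delta f(z)\, \log\big|\det H^{z}\big|\,\mathrm{d}^2 z,
\qquad
H^{z} = \begin{pmatrix} 0 & X - z \\ X^{*} - \bar z & 0 \end{pmatrix},
\]

with log |det H^z| then rewritten as an integral over eta of Im Tr (H^z - i eta)^{-1}; and in the i.i.d. case the Stieltjes transform m = m^z(w) of the density of H^z solves the cubic equation

\[
-\frac{1}{m} \;=\; w + m - \frac{|z|^2}{w + m},
\]

which at z = 0 reduces to the usual semicircle equation.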
The important thing is that it is a cubic object. Now here you see the relation between the two things, the non-Hermitian eigenvalues and the Hermitized matrix. What you see here is the spectrum in the non-Hermitian situation; this red dot is the reference point z in H^z, and at the end of the day we have to integrate over all z, that is what Girko's formula says. And here you see the density of states of H^z as a function of z; actually it depends only on the absolute value of z. As you see (I cannot stop it here, in the other version I could), when z is inside the disk you start from something like the semicircle, as z moves out the semicircle develops a cusp behavior, and when z is exactly at the boundary you have a cusp. That is the reason why we can do the whole thing: if you are at the edge, then after Girko's Hermitization you are in a cusp regime for the Hermitized matrix. In the cusp regime the density is small, essentially vanishing at the cusp, similarly as at an edge, and once the density is small you can use the simpler method, following the Dyson Brownian motion up to infinite time. That is why we can do this.

Here is the corresponding local law; I already advertised it, here it is written more explicitly, and actually it was proven in much more generality. The pictures and formulas are just for the i.i.d. case, but we proved something which is called a general inhomogeneous local law: we do not have to assume the i.i.d. situation, we can just assume that the variances are of order 1 over n, with upper and lower bounds. In that case the local law is exactly what you expect. As usual there are two types of local laws, an isotropic version and an averaged version; a local law is always about comparing the resolvent with a given deterministic object, which you get from solving the Dyson equation, and then you either test the difference against two deterministic vectors, or you take the trace. The trace of course gives an extra averaging, so the result is stronger there. The bounds are exactly what you expect for a typical local law; the important thing is that the local density appears in the error terms, so if you are around the cusp, where the density is small, the local law becomes more precise, and that is the reason why one can do the whole thing. There are some remarks here about the proof, but I do not have much time, so let me jump over them; let me just say that it is highly nontrivial, and I think the main reason is that I have to use the general theory for this matrix H^z, but this matrix is not flat: there are big zero blocks, and the zero blocks are very bad, so one has to do a completely separate analysis to handle them.
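Numerically, the formation of the cusp in the Hermitized spectrum is easy to see; here is a small sketch, again just an illustration in Python assuming numpy, using the fact that the eigenvalues of H^z are plus/minus the singular values of X minus z:

```python
import numpy as np

n = 1000
X = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2 * n)

for z in (0.0, 1.0, 1.2):
    # Singular values of X - z; the spectrum of H^z is {+s_i} together with {-s_i}.
    s = np.linalg.svd(X - z * np.eye(n), compute_uv=False)
    print(f"|z| = {z:.1f}: smallest singular value = {s.min():.2e}")

# Roughly, one expects the smallest singular value to be of order 1/n for |z| < 1
# (positive density at zero), of order n^(-3/4) at |z| = 1 (the cusp), and bounded
# away from zero for |z| clearly outside the disk (a gap opens around zero).
```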
So now here is the whole proof on one page, once we have digested all these things. We consider an Ornstein-Uhlenbeck flow, designed in such a way that the variance stays constant, it remains 1 over n; the initial condition at time zero is the original matrix, and at time infinity we have the Ginibre matrix, and Ginibre is what we know. Now we try to follow the evolution of the local statistics, in this case the linear statistics from Girko's formula, from time zero to infinity. Here I will just write up the formulas for a fixed time, but at the very end of the day I have to integrate all these formulas over time from zero to infinity; this I am not going to write out.

So here is Girko's formula. It has a z integral, which is not so important; the eta integral is the important one. The eta is the spectral parameter in the resolvent G, and it basically sets the scale: when eta is large, this quantity is sensitive to mesoscopic or even macroscopic statistics of the eigenvalues, and as eta gets smaller it starts asking questions on smaller and smaller scales. That is how to interpret it. One splits the eta integral into parts. If you run this Dyson Brownian motion flow and use the local law I showed before, then it turns out that you can control the time evolution of this object from time zero to infinity in the regime where eta is fairly big, bigger than about n to the minus five-sixths; that is actually quite small, but it is still not the 1 over n scale. So on larger scales, and larger here means above n to the minus five-sixths, the local law ideas are sufficient, if you use this best possible recent local law. In the regime where eta is much smaller than 1 over n, that is the regime where you do not expect any eigenvalues, but you have to prove it, you have to come up with some argument, and it turns out to be a nontrivial question about the probability of small eigenvalues of this Hermitized matrix, which is of course the same as small singular values of the non-Hermitian matrix X minus z. There is a theorem of exactly this type, a very, very general theorem due to Sankar, Spielman and Teng, which excludes such small eigenvalues. But this theorem is so general that it has to be somewhat weak, and the weakness is that it only excludes eigenvalues on scales below 1 over n; so the regime eta much below 1 over n is also fine, there is nothing contributing there. And finally there is a gap between the two regimes: the trivial estimate works below 1 over n, the nontrivial estimate works above n to the minus five-sixths, but there is still something in between, and what do we do with that? Let me mention that if you allow more matching moments, then this first threshold can be pushed down and there is no gap; that is why the moment matching method works. Closing this gap leads to the second part of the talk, about the lowest eigenvalues, for which I do not really have time because my time is up, so let me just spend two minutes explaining the result.

This is about the famous Sankar-Spielman-Teng result. It is about the singular values of a matrix of the following type, and it is very, very general: you take any matrix A_0, literally any deterministic matrix, and you add to it a Ginibre matrix. It is a smoothed-analysis type result: the added randomness smooths out the matrix A_0, and the theorem says that A_0 plus X does not have very small singular values. More precisely, the probability that the smallest eigenvalue of (A_0 plus X) star times (A_0 plus X), which is the square of the smallest singular value, is below x times 1 over n squared, is bounded by roughly the square root of x; and 1 over n squared is the right scale in that situation. It is a very robust bound. Actually this bound is essentially optimal in the real case, because if A_0 is zero and X is a real Ginibre matrix, then Edelman computed the tail distribution of the smallest singular value, and it looks like this; there is a difference between the real and the complex case. Also, when z is outside of the spectrum there is no problem: in our case A_0 is just minus z times the identity, and if z is well outside the spectrum then you know there is no small singular value at all, that is the trivial case. But now we have to work in the transitional regime. Can we improve this result? It is so general that it cannot really be better than it is: it is not sensitive to whether you are in the transitional regime.
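In our normalization (entries of variance 1/n), the Sankar-Spielman-Teng type bound I am using can be stated roughly as follows; constants and the precise formulation are omitted, this is just schematic:

\[
\mathbb{P}\Big( \lambda_{\min}\big( (A_0+X)^{*}(A_0+X) \big) \le \frac{x}{n^{2}} \Big)
\;\lesssim\; \sqrt{x},
\qquad\text{equivalently}\qquad
\mathbb{P}\Big( \sigma_{\min}(A_0+X) \le \frac{y}{n} \Big) \;\lesssim\; y,
\]

uniformly in the deterministic matrix A_0; this is exactly what is needed to discard the eta regime below 1 over n, but it is blind to the distance of z to the unit circle.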
And this is our theorem, which basically says that Edelman remains correct in this regime; Edelman's result holds for z equal to zero, and we need it for any z. So our result is about the singular values of a Ginibre matrix plus z, a shifted Ginibre matrix, and the lowest singular value can be bounded in this way. The most important thing in the bound is that 1 minus the absolute value of z, the distance of z to the unit circle, appears; this is the delta. In particular, if delta is zero, which is our real interest, when we are really at the edge, then you see that, because of the minimum in the bound, instead of 1 over n squared you get 1 over n to the three-halves, and that is the right scale at the edge. So we have this bound; this is just a picture, and this bound allows you to bridge the gap between the two regimes. Of course one still has to work, because this result is only for the Gaussian case, so one has to do an additional Green function comparison argument to generalize it to the non-Gaussian situation, and this completes the universality proof. I do not want to say anything more about the proof; the proof actually uses a supersymmetric technique, and I put in some horrible formulas to scare you away, I do not expect you to digest them before lunch. The key point is the superbosonization formula, a marvelous identity which I would like to call your attention to; it translates the question into a contour integral, and then one has to do some complicated contour integral analysis. So here is the summary: we proved spectral universality for non-Hermitian random matrices, but only at the edge, the bulk is an open question, and along the way we obtained an optimal lower tail estimate for the least singular value of shifted Ginibre matrices. Thanks.