Okay, thank you to the organizers for inviting me; it is a pleasure to speak here. I am going to describe a series of results, partially published, from a big joint project with Tatiana Shcherbina from Princeton University. The title, as was already said, is local eigenvalue statistics for band matrices, but I am going to speak about one special approach, the transfer-operator approach.

First, what is a band matrix? The simplest example is the model which is zero outside the central band of width 2W + 1, and on these diagonals, up to the Hermitian symmetry condition, you have i.i.d. random variables with zero mean and variance 1/(2W), so that in each row and each column the total variance equals one. We are interested in the limit when W tends to infinity and, of course, N tends to infinity, with N bigger than W; we do not consider the case when W is proportional to N, so W/N tends to zero. I would like to say from the very beginning that this model is expected to have a crossover when W² is proportional to N, so it is interesting to consider W² near N, and then less than or bigger than N.

This is the simplest case, as I said. There is a more general definition of the band matrix model, where the matrix need not vanish outside some band: we still take i.i.d. random variables with zero mean, but the variance depends on the distance to the diagonal, with a scaling in this distance: Var H_jk = J((j − k)/W)/W, where J is some summable function. If you put a step-like function here, you recover the previous model. Our model — and this is the only case which we can treat for this type of band random matrix, which is a limitation of the methods I will apply — is the special choice where the variance behaves like (2W)⁻¹ e^{−|j−k|/W}; it is very similar.

Another type of band matrix which I am going to consider is the so-called block band matrices — in some sense it is a Wegner model, although the Wegner model is more general. Here we have n blocks on the main diagonal; the total dimension is N = nW, where W is the dimension of each block, and the matrix is zero outside these block diagonals. The diagonal blocks are independent GUE matrices with entry variance (1 − 2α)/W, where α is a parameter less than 1/4; the neighboring off-diagonal blocks are Ginibre-type, independent and identically distributed but not Hermitian — of course, with the symmetry condition between the blocks above and below the diagonal, because the whole matrix is Hermitian — and with entry variance α/W, so that the total variance in each row is again one. Here the crossover is also expected for W² proportional to N, but since N = nW, in terms of the number of blocks the crossover is at W proportional to n. So these are the two models which I am going to discuss.
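To make the definitions concrete, here is a minimal numpy sketch of these two variance structures; the complex Gaussian entries, the function names, and the exponential profile J are my illustrative assumptions — the statements in the talk need only zero mean and the stated variances:

```python
import numpy as np

def band_matrix(N, W, rng):
    """First model: Hermitian N x N matrix, zero outside the band |j-k| <= W,
    i.i.d. complex Gaussian entries of variance 1/(2W) inside it, so each
    row carries total variance (2W + 1)/(2W) ~ 1."""
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2)       # Hermitian, unit-variance entries
    band = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :]) <= W
    return np.where(band, H, 0) / np.sqrt(2 * W)

def band_matrix_profile(N, W, rng, J=lambda x: 0.5 * np.exp(-np.abs(x))):
    """General-profile version: Var H_jk = J((j-k)/W)/W for a summable J;
    the step profile J = (1/2) * 1_{|x| <= 1} recovers band_matrix above."""
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2)
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    return H * np.sqrt(J(d / W) / W)

rng = np.random.default_rng(0)
H = band_matrix(2000, 45, rng)   # W ~ sqrt(N): near the expected crossover
```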
Now, some results about the global regime, which was studied many years ago. The first result is due to Molchanov, Khorunzhy, and Pastur ('92): they proved that the first correlation function — the one-point marginal of the joint eigenvalue distribution — converges to the semicircle law for any rate of W versus N. A more recent result is a central limit theorem, and there again there is no crossover: the fluctuations of the linear eigenvalue statistics, for smooth enough test functions, converge after proper normalization to a normal random variable. One should put the right normalization here, but with this normalization the answer does not depend on whether W² is bigger or smaller than N; that is the difference from the previous result of Soshnikov.

Okay, and the main object will be the spectral correlation functions. As usual in random matrix theory, the spectral correlation functions are just the marginal densities of the joint eigenvalue distribution: we take the joint eigenvalue density and integrate out the variables starting from the (k+1)-th. If we study the local regime in the bulk, we take some point E in the bulk — here the bulk is (−2, 2) — consider λ₁, ..., λ_k in a neighborhood of E with the appropriate normalization of the distances, and study the limit of this correlation function as N tends to infinity. I will say that the model has GUE statistics if, in the limit, the rescaled correlation functions have this determinantal representation with the sine kernel; and I will say that the model possesses Poisson local statistics if the same limit, with the proper normalization written here, is identically one.

I am not going to speak much about localization and delocalization, but some of the results are connected with this fact, and in fact it is widely believed that Poisson statistics corresponds to localization and GUE statistics to delocalization. That is why I prefer to give the definition which I would like to have. We take some eigenvector — which of course is random, and which we take normalized — and consider the sum Σ_j |c_j|^{2s} with s ≥ 2. Of course, for a model like GUE this will tend to zero, because to have a normalized vector we must take the c_j proportional to 1/√N; so for s equal to one the sum is one, but for bigger s it tends to zero. If this condition is not true, the vector is called localized. I do not speak about exponential localization, differently from the Anderson model, but normally, if we have localization, we have something like this.

Okay, and the main point of attraction, at least for me, in this model is the so-called Anderson transition for random band matrices. Excuse me, please, maybe I will come back a little bit: this model can of course be defined for any dimension d, not only d = 1. The picture of course will not be the same, but you put here d, so from the very beginning the model is d-dimensional; for d > 1 there are only a few results, and since today I am speaking mainly about d = 1, I will not describe them — but anyway, there is this dimensionality. The conjecture is: for d = 1, as I said before, there is a crossover — for W much bigger than √N we have GUE statistics, for W much smaller, Poisson local statistics; for d = 2 the crossover is expected when W² is proportional to log N; and for d ≥ 3 the crossover is expected at some fixed W, just a constant.
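As an aside, the delocalization criterion above is easy to probe numerically; a minimal sketch (the function name is mine, and s ≥ 2 is the threshold from the talk):

```python
import numpy as np

def eigenvector_moments(H, s=2):
    """For each normalized eigenvector psi of H, return sum_j |psi_j|^(2s).
    A delocalized, GUE-like vector has |psi_j| ~ N^(-1/2), so the sum is
    ~ N^(1-s) -> 0 for s >= 2; for a localized vector it stays of order one."""
    _, vecs = np.linalg.eigh(H)   # columns are orthonormal eigenvectors
    return np.sum(np.abs(vecs) ** (2 * s), axis=0)
```

Applied to the band_matrix sketch above, one expects values of order one for fixed small W, and values decaying with N once W is well above √N.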
Now I will mention some results about d = 1. The first was the result of Fyodorov and Mirlin — at the theoretical-physics level of rigor — who showed the existence of the crossover for W² proportional to N. Then there are mathematical results. The first is by Schenker, who proved localization under his condition on W (a localization length of order W⁸). Then the results of Erdős, Knowles, Yau, and Yin: delocalization, in the first case for W much bigger than N^{6/7}, here N^{4/5} — I recall that the conjectured answer is W much bigger than N^{1/2}. There are also results of Tatiana Shcherbina for Wegner matrices with a fixed number of blocks, which means W proportional to N, and the result of Bourgade, Erdős, Yau, and Yin ('16), who prove GUE statistics for W proportional to N. The last, recent paper, by Bourgade, Yang, Yau, and Yin ('18), proves GUE statistics and delocalization under their condition on W, and I would like to stress that they consider random band matrices of a much more general type than I do. A little bit aside are the results of Sasha Sodin: he proved universality at the spectral edge, where the crossover is at W ~ N^{5/6}.

Now, the main object which I am going to discuss is the so-called generalized correlation function. Here is the definition: it is the expectation of a ratio of determinants — in the case of the first correlation function a ratio of two determinants, and for the second correlation function four determinants are in the game. Of course, if we take the derivative with respect to a parameter — for example z here — and then put z′ = z, we obtain the trace of the resolvent, and through it the first correlation function; for the second we should take a second derivative. That is why to study the standard correlation functions in this way is the same as to study these generalized ones. We take these correlation functions because we use supersymmetric (Grassmann) integration, and it is much more convenient to work with determinants, since for determinants there are good integral formulas.

And if we are going to study the local regime, again we take E inside the spectrum, between −2 and 2, and put here perturbations on the local scale. I would like also to add the correlation functions of characteristic polynomials: there is no direct connection of these with the spectral correlation functions, but they are in some sense less difficult to study, while demonstrating the same behavior, with the crossover and so on — so it is a very good model problem just to try the methods.

Okay, now, for this generalized correlation function, δ is equal to 0, 1, or 2: δ = 0 corresponds to the correlation functions of determinants (characteristic polynomials), and then we have the first and the second correlation functions. In all cases we have a representation of this form, where the K_δ are some kernels — rather complicated kernels. The variables inside the kernels are: in the case δ = 0, just two space variables and a unitary matrix; in the case δ = 1, two space variables and no unitary matrix; for δ = 2, four space variables, the unitary group, and also a hyperbolic group (here is the definition of the hyperbolic group). Everywhere dx means integration with respect to x. And, most important, the representation has the form of a chain of these operators — that is why we need the special form of the variance.
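Let me record the derivative trick just mentioned in formulas; in my notation, for the simplest ratio:

```latex
% R_1(z, z') := E[ det(H - z) / det(H - z') ]  (my notation, not the slide's)
\partial_z \det(H - z) \;=\; -\det(H - z)\,\operatorname{Tr}(H - z)^{-1}
\quad\Longrightarrow\quad
\partial_z R_1(z, z')\big|_{z' = z} \;=\; -\,\mathbf{E}\operatorname{Tr}(H - z)^{-1} .
```

Stieltjes inversion of N⁻¹ E Tr(H − z)⁻¹ at z = E + iε then recovers the first correlation function; for the second one, one takes one derivative in z and one in z′ of the four-determinant ratio.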
Okay, and what is the idea of the transfer-operator approach? It is just the general observation that if you have an integral representation like the one before, then you can consider the compact integral operator K with kernel K(x₁, x₂); the integral is then the result of applying the (N − k)-th power of this operator to our vectors f and g, and using the spectral theorem you can write it as Σ_j c_j λ_j^{N−k}, where the coefficients c_j are expressed in terms of the eigenvectors and λ_j is the j-th eigenvalue of the operator K.

The idea looks very attractive: as you saw on the previous slide, if we know the spectral properties of the operator K, then we know our generalized spectral correlation functions. But life is not as beautiful as in the picture, because there are many difficulties. First of all, these operators depend on W, and they are not self-adjoint. We have a big parameter, so there is the nice idea to use perturbation theory; but when the operators are not self-adjoint, a specialist will tell you that perturbation theory is not so pleasant to apply, and you need to introduce other methods to do this. For me a big difficulty is also the integration with respect to — not the unitary, but the hyperbolic group, which I do not like so much: for example, the spectrum of the Laplace operator on this group does not start at zero, which is very inconvenient, because the intuition is against this.

One more difficulty is that these K are not just scalar kernels; they are scalar only in the case of K₀. For K₁ and K₂ we have matrix kernels — in the case of K₂, for example, the matrix kernel is 2⁸ × 2⁸; using the symmetry conditions you can reduce it to 70 × 70, which is better, but anyway not so pleasant. There is also the problem that the zeroth order of this matrix, which is very easy to find, contains Jordan-type cells, and Jordan cells are again very inconvenient when applying perturbation theory.

And maybe the main problem is that our integral representation contains, in front of the integral — for R₂, for example — a factor W². So you cannot take only the zeroth-order, main term of the operator: you also need the first and second orders in 1/W, and you must prove that the zeroth- and first-order terms do not contribute. The zeroth order is rather simple, just a product of four 2×2 matrices — this is not so difficult; but the first order is rather complicated, and for these terms there is no hope, at least for me, to compute them directly — one has to use some trick to show that they are of the form which is needed.

So, since we have all these difficulties, we did the project step by step. On the first step we studied characteristic polynomials, because it is the simplest case but it demonstrates the same transition, and here we can do everything: we can prove GUE statistics, we can prove Poisson statistics, and we can also study the critical regime when W² is proportional to N — that is a recent paper of Tatiana Shcherbina.
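The identity at the start of this discussion is elementary to check numerically on a toy kernel; in this sketch the grid, the kernel, and all names are my own illustrative choices, not the operators from the talk:

```python
import numpy as np

def chain_integral(kernel, f, g, x, n):
    """Toy transfer operator: approximate the chain integral
        I = int f(x0) K(x0,x1) ... K(x_{n-1},x_n) g(x_n) dx0 ... dxn
    on a uniform grid x as I ~ dx * f^T M^n g with M = K * dx, and evaluate
    it through the spectral decomposition M = V diag(lam) V^{-1}, i.e.
    I ~ sum_j c_j lam_j^n  (the operator need not be self-adjoint)."""
    dx = x[1] - x[0]
    M = kernel(x[:, None], x[None, :]) * dx
    lam, V = np.linalg.eig(M)
    c = (f @ V) * np.linalg.solve(V, g) * dx
    return np.sum(c * lam ** n)

# Example: a Gaussian kernel on [-5, 5].
x = np.linspace(-5.0, 5.0, 400)
k = lambda a, b: np.exp(-(a - b) ** 2 - 0.1 * (a ** 2 + b ** 2))
print(chain_integral(k, np.exp(-x ** 2), np.exp(-x ** 2), x, 20))
```

For large n the largest eigenvalue dominates the sum, which is exactly what the method exploits — and the non-self-adjointness is visible even here: np.linalg.eig, unlike eigh, gives no orthogonality to lean on.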
Then the case of the density of states: here we have two Grassmann variables but no unitary or hyperbolic group, and here we can show the local semicircle law. There is also the sigma-model approximation — I do not want to explain exactly what it means, but it is some approximation for the second correlation function, and physicists believe that it demonstrates the same behavior as the standard correlation function; here we can also prove universality — that is last year's paper. And finally the second correlation function itself: we have the unitary group, the hyperbolic group, four space variables, and eight Grassmann variables — that is why the matrix kernel is 2⁸ × 2⁸ — and we can now prove universality here for W much bigger than N^{5/6} (sorry, here there should be a 5: the exponent is 5/6, or correspondingly 5/3 if the condition is written for W²).

Okay, now let me say a few words about our approach. In fact, it is not so convenient to use the spectral analysis directly; much more attractive, again at least for us, is the resolvent approach. This means that in the integral representation — for the zeroth, first, or second correlation function — we use the Cauchy formula, with the resolvent of our operator integrated over a special contour containing all the relevant eigenvalues. The idea is this: we cannot prove that, for example in the case of K₀, the operator literally is a tensor product or something like this, which would be very convenient for us; but we define an asymptotically equivalent operator, whose resolvent is very close to ours on that special contour. If the resolvents are close there, then of course the two operators give almost the same answer — this is the resolvent equivalence.

Using this definition, I can explain the mechanism of the crossover very simply on the example of R₀. In this case we are able to prove that our operator K₀ is equivalent to a tensor product of two operators, one acting on the unitary group and the other on the space variables. Since we have the same factorization for f and g, we finally obtain that our function can be written in this product form, and if we divide by the normalization term, we need to study only one factor: the space operator does not depend on ξ — the dependence on ξ sits only in the unitary-group operator — so the space factor is the same in the numerator and in the denominator. Then we obtain an expression like this, and you can study only the operator K*.

About K*, the good news is that it is self-adjoint; the much better news is that we know all its eigenvalues and all its eigenvectors, because it is a difference operator: it commutes with the shift operator on the unitary group, and hence with the projections onto the irreducible representations of this shift. So the eigenvectors do not depend on t; they depend only, in some sense, on the ratio of u₁ to u₂. Thus we obtain that this operator is a good self-adjoint operator, and we add to it some perturbation. For j = 0 the first eigenvalue is of course one, and for j ≥ 1 we have a spectral gap proportional to 1/W²; the perturbation here is the operator of multiplication by some function, proportional to 1/N. So we have a self-adjoint operator whose spectral gap is proportional to 1/W² and whose perturbation is proportional to 1/N.
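Both regimes now follow from comparing the gap with the perturbation; before doing that comparison, let me record, in my own notation, the resolvent (Cauchy-formula) form of the transfer identity on which the asymptotic equivalence above rests, with 𝓛 a contour enclosing all the relevant eigenvalues of K:

```latex
\big(K^{N-k} f,\, g\big)
  \;=\; -\frac{1}{2\pi i}\oint_{\mathcal L} \lambda^{\,N-k}\,
        \big((K-\lambda)^{-1} f,\, g\big)\, d\lambda .
```

So if two operators have resolvents that are uniformly close on 𝓛, the corresponding chain integrals are close as well — this is exactly the sense in which K₀ can be replaced by the tensor product, and it is also why only the spectrum near the top of the contour matters.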
Now, if 1/W² is much bigger than 1/N — the spectral gap much bigger than the perturbation — then of course you can use perturbation theory: the perturbed eigenvalue is as written here, and this product term is zero, therefore the first eigenvalue is just one, while for the second, as I told before, there is the spectral gap of order 1/W². So in the limit we obtain the projection onto the eigenvector corresponding to this top eigenvalue, and if we do this correctly, then — since, as I said, the first eigenvalue does not depend on ξ — we obtain one.

Now, what is the mechanism for the GUE behavior? If our spectral gap is much less than the perturbation, then it is not difficult to prove that the unperturbed operator tends to the unit operator on the subspace we are interested in, and then our operator has the form: the unit operator minus N⁻¹ times some other operator — it is not so important which one. From this formula the representation follows immediately, because we just need to raise this expression to the N-th power, and we obtain what we want; the resulting expression is sin(πξ)/(πξ), the same as in the GUE case.

Okay, and let me say a few words about the case when W² is proportional to N. Here, if you analyze more carefully the behavior of K*(ξ), you see that the leading term is still the same one, but the first correction is proportional to 1/N, and it is given by the Laplace operator on the unitary group — here is its expression — plus some multiplication operator. So, after some work, you obtain that your correlation function has this form.

Okay, now let me formulate the results more precisely. The first one was by Tatyana in '14: she proved that if N is much less than W^{2−θ}, with arbitrary θ > 0, then in the bulk case we obtain the sine kernel. The next result was joint, in 2017: we proved that if N is much bigger than W², then the limit of the second correlation function is one — Poisson-type statistics. And the third one is the case when N is proportional to W², with the answer which I explained above.

Okay, now what about the more general model? In the case of the first correlation function there are in fact not so many problems — it is rather a way to get some training in this technique — because here there is no unitary group, there is only the integration with respect to the space variables, and the spectral gap is much bigger. We obtain the result for any N bigger than W, and this condition is okay because, as I said from the very beginning, W/N tends to zero — W cannot be bigger than N, it can only be proportional to it — so we exclude only the proportional case. And we have uniform convergence of the first correlation function to the semicircle law with this rate.
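Before turning to the sigma-model, here is the GUE mechanism from a moment ago in caricature: the N-th power of "identity minus a small operator" becomes an exponential. The matrix A below is an arbitrary stand-in, not the actual operator from the talk:

```python
import numpy as np
from scipy.linalg import expm

# Gap O(1/W^2) much smaller than perturbation O(1/N): on the relevant
# subspace the transfer operator looks like I - A/N, and its N-th power
# converges to the semigroup exp(-A).
N = 10_000
A = np.array([[0.3, -0.1], [0.2, 0.5]])
powerN = np.linalg.matrix_power(np.eye(2) - A / N, N)
print(np.abs(powerN - expm(-A)).max())   # small, O(1/N)
```

In the talk's setting the exponential carries the ξ-dependence, and integrating it out produces the sin(πξ)/(πξ) answer mentioned above.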
Now a few words about the sigma-model. What does it mean? In the sigma-model we take our integral representation for the second correlation function and perform in this representation some additional limiting transition. By the way, this is also a new result, because physicists just write down this model without any explanation of why it is the same. So we proved that, after this special transition, our correlation function can be written in this form. Here it is written in the language of supermatrices, which means that the diagonal blocks are of even order in the Grassmann variables and the off-diagonal blocks are of odd order; so we have these supermatrices, and the integration is with respect to a unitary matrix, a hyperbolic matrix, and four Grassmann variables. Again, this case was done just to see what will happen in the general model, because without it I cannot imagine that it is possible to understand what the structure of the limit is for the standard correlation function.

Now, what is the result for the second correlation function of this sigma-model? There is a new parameter β, and we expect the crossover when β is proportional to N. We obtain this part of the universality conjecture: we prove that the second correlation function for this random block band matrix — it is written here for which α — converges in the limit to the second correlation function which corresponds to GUE. Again, the convergence here is also not so pleasant to study: there are a lot of difficulties, because at the very beginning you have a much bigger matrix, and all the coefficients of this matrix depend on U and S — and in the standard model also on the space variables. The sigma-model is, in some sense, a legal way to get rid of the space variables and keep only the dependence on the unitary and the hyperbolic variables.

So here is the result for the sigma-model. I would like to stress that there was a technical condition, E between −√2 and √2; now we have got rid of this technical condition, so in fact we can do the same for the whole spectrum — the whole spectrum being from −2 to 2. And the final result: here we can prove something very similar to the sigma-model case, but now the matrices are only 6 × 6, and we obtain that for any E in the bulk of the spectrum, under this restriction on W, the limit of the second correlation function corresponds to the GUE case.

I would like to say that this work is in preparation; in fact, I think we will put it on the arXiv in a week or two. For me it was maybe the most complicated and huge paper of my life, because the structure is very complicated and there are very many places where — okay, each step maybe is not so difficult, but there is a huge number of these difficulties, and to overcome them all and make the paper readable was really very hard for us. So thank you.