The first seminar after lunch is by Professor Mariya Shcherbina, who came all the way from Ukraine. She works at the Institute for Low Temperature Physics in Kharkov, and the title of her talk is "Transfer Matrix Approach to 1D Band Matrices."

Okay. First of all, I would like to thank Gennar for inviting me here. It is a real pleasure to be here and to speak here; it is a very nice summer school, very interesting for me, so thank you once more.

What I am going to discuss is band matrices, and my talk is based on joint work with Tatyana Shcherbina from Princeton University. She is my daughter, so it is practically a family business. We started from one joint paper; now there are three, and maybe three more to come, so it is a rather long project.

I will start from the definition of a band matrix. What is a one-dimensional, typical, very simple band matrix? You have a Hermitian matrix, and inside a band of width 2W + 1 around the diagonal you have random entries, while outside you have zeros. As usual, the entries are normalized: there is the symmetry condition, the expectation is zero, and the variances are normalized in such a way that the sum of the variances in each row equals one. The regime we are going to study is W tending to infinity, with N, the size of the matrix, also tending to infinity.

It is very easy to see that this model is an interpolation: at one end there are the Wigner matrices, when we have the full matrix with nonzero entries, and at the other end matrices of random Schrödinger type, because if the matrix is tridiagonal it is something similar (not exactly, but similar) to the random Schrödinger operator.

I would also like to say that, of course, you could consider the model not only on a one-dimensional lattice but on a d-dimensional lattice. I am going to discuss mainly what happens for d = 1, which is the example given here, but in principle it is very interesting what happens in higher dimensions; there you again put variance zero outside a d-dimensional band and something nonzero inside the band.

In its most general form the model is this: you take some profile function u with total integral equal to one, and you rescale the variances in the way written here. This is the general form of a random band matrix.

What I am really going to discuss today is a model which is a little different from this classical one: block band matrices. The matrix is composed of n × n blocks, each block of size W × W. On the block diagonal you have GUE matrices, with the variance as written, and on the +1 and −1 block diagonals you have blocks which are adjoint to each other, because the matrix must be Hermitian; in fact they are W × W random matrices, independent of each other. So it is not exactly the band matrix written at the very beginning, but something usually called the 1D Wegner model; it is not exactly Wegner's model, which is a little different, but something similar. Here the interesting regime is when N is proportional to W², which for this model is the same as n proportional to W; I will explain a little later why this is interesting. A schematic form of both models is written below.
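In formulas, the general band matrix model just described is usually written as follows (a schematic reconstruction of what is on the slides; the exact constants may differ):

\[
H = H^* \in \mathbb{C}^{N \times N}, \qquad \mathbf{E}\, H_{jk} = 0, \qquad
\mathbf{E}\, |H_{jk}|^2 = \frac{1}{W}\, u\!\left(\frac{j-k}{W}\right), \qquad \int u(t)\, dt = 1,
\]

so that the variances vanish outside a band of width of order W and sum to approximately one in each row. The block band model is, schematically,

\[
H = \begin{pmatrix}
A_1 & B_1 & & \\
B_1^* & A_2 & B_2 & \\
& B_2^* & A_3 & \ddots \\
& & \ddots & \ddots
\end{pmatrix}, \qquad N = nW,
\]

with the A_j independent W × W GUE blocks and the B_j independent W × W Gaussian blocks (the relative variances of the A's and B's are fixed on the slides and omitted here).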
Now let me mention the known results. Okay, sorry, just a few. The result for the global regime is by now almost trivial: it was obtained many years ago by Khorunzhy, Molchanov, and Pastur. Of course we have the semicircle law in the limit when W tends to infinity, and this regime does not feel the crossover which I will show you a little later, so from my today's point of view it is not very interesting, but it is an old result. There are also results about the central limit theorem for linear eigenvalue statistics: from the very beginning they were obtained by Soshnikov with coauthors, but with the restriction W² greater than N; then I removed this restriction. But, just to mention, this is again not so important for my talk, because, as you see, the global regime does not feel any crossover.

Okay. Now I would like to speak about the localization length. Not because I am going to discuss localization itself, but because the results about band matrices deal with localization, and also because the localization length helps to understand the crossover which occurs here. So what is the localization length? Normally it is the typical length of the eigenvectors: if an eigenvector is not exactly zero outside some interval but decays exponentially, like e^{-cx}, where x is the distance in components, then the length L written here is 1/c.

There are physical conjectures, and they are very similar to those for the Anderson model; that is why I would like to mention them. The first one is that for d = 1 we have a kind of crossover when the localization length becomes proportional to W², and this happens when W² is proportional to N. This is a conjecture, again. It is widely believed that localization means Poisson statistics and delocalization means GUE statistics, and what I am really going to discuss is the crossover between local GUE statistics and local Poisson statistics in d = 1. But there are also conjectures for d = 2, written here, and for d ≥ 3: delocalization should happen when W is bigger than some constant. There are no rigorous results for d = 2 or 3, so let me mention the few results, very few, for d = 1.

The first one was by Mirlin and Fyodorov, about the existence of this crossover when W² is proportional to N, but it is not mathematical at all; I am not able to understand what is written there, sorry. Then there are some rigorous results: in 2009 Schenker found a bound for the localization length, from the localization side, and there are also results of Erdős, Yau, and Yin, of Erdős and Knowles, and of Erdős, Knowles, Yau, and Yin; you see that anyway the bounds on the localization length are far from being optimal. There is also a result of Tatyana Shcherbina that there is GUE statistics for the Wegner-type model with these block matrices when n is fixed, and there is also a result of Bourgade, Erdős, Yau, and Yin: they proved that there is GUE statistics when the bandwidth is proportional to N.

Now I would like to present the main objects which I am going to study. I will call them generalized correlation functions, and they are just expectations of ratios of determinants: for the first correlation function, one determinant in the numerator and one in the denominator, and for the second correlation function, two in the numerator and two in the denominator. It is very easy to see the link with the usual correlation functions (see the formulas below): if you differentiate the second generalized correlation function with respect to the parameters z which are here, then you obtain traces of the resolvent, and the standard spectral correlation functions can be found from traces of the resolvent as follows.
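In a standard notation (a reconstruction; the precise z-parameters are on the slides), the generalized correlation functions are

\[
R_1(z_1, z_2) = \mathbf{E}\left\{ \frac{\det(H - z_1)}{\det(H - z_2)} \right\}, \qquad
R_2(z_1, z_2; z_1', z_2') = \mathbf{E}\left\{ \frac{\det(H - z_1)\, \det(H - z_2)}{\det(H - z_1')\, \det(H - z_2')} \right\},
\]

together with \( R_0(z_1, z_2) = \mathbf{E}\{\det(H - z_1)\det(H - z_2)\} \), the correlation function of two determinants. Since \( \partial_{z'} \log \det(H - z')^{-1} = \operatorname{Tr}(H - z')^{-1} \), differentiating R_2 in the denominator parameters and then setting them equal to the numerator parameters gives

\[
\frac{\partial^2}{\partial z_1' \, \partial z_2'}\, R_2 \Big|_{z_1' = z_1, \; z_2' = z_2}
= \mathbf{E}\left\{ \operatorname{Tr}(H - z_1)^{-1} \operatorname{Tr}(H - z_2)^{-1} \right\},
\]

from which the standard spectral correlation functions are recovered by taking z_1, z_2 close to the real axis (Stieltjes inversion).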
So, if you know the behavior of these correlation functions very near the real axis, as here, then you know the behavior of the second spectral correlation function. I will also consider the correlation function of two determinants, maybe because it was the first problem we studied, and it is connected with the other ones. Okay, so it will be R_0, R_1, and R_2 that I am going to study.

Now, if you use the Grassmann integration technique, it is a rather standard exercise to obtain, for this model, integral representations for all these correlation functions. You can see that in all three cases you have an integral of a function which depends on pairs of variables, and you have something like a cycle of these variables here. X could be, for example, unitary matrices in the case of R_0 or R_1, and it could be a pair of matrices, where X_1 is unitary and X_2 has the form written here, with S a member of the hyperbolic group; here, just to recall the characteristic property of the hyperbolic group, I denote by L the diagonal matrix with plus and minus one on the diagonal.

So what is the idea of the transfer operator approach? Of course it is not our idea; it is a very, very old observation, and people use it in many fields of mathematics. If you have some kernel (a matrix kernel in general, but for simplicity take just a normal scalar kernel) depending on two variables, and you have an integration like this, then you can introduce the integral operator with this kernel and write your integral as the result of applying the n-th power of this operator to the vector f, paired with g-bar. If your operator is compact, then you can rewrite this in the language of eigenvalues and eigenfunctions, as written here (and schematically below). It is very useful because, I hope to convince you, using this property you can obtain the crossover, as W and N grow, very clearly and very simply, since everything depends only on the spectral distribution of the operator K.

So what do we have here? In fact this K_alpha, for alpha = 0, 1, 2, can every time be written like this: you can see three operators here, a multiplication operator F on the sides, and in the center some operator which we can analyze. The multiplication operator contains 1/n in the exponent in each component: it can be not just an operator but a matrix operator, and each component of this matrix operator is just the operator of multiplication by a function which contains 1/n in the exponent (and there are n − 1 such factors along the chain). As for the central operator, normally we can analyze it, and we can describe its first eigenvalues. I forgot to say (it was written, but maybe it is worth repeating) that I order the eigenvalues in decreasing order; for a compact operator it is of course the most natural way to order them. And normally it comes out like this: for the case alpha = 0, that is, for the correlation function of determinants, we have the first spectral gap proportional to W^{-1}; for the first spectral correlation function, the gap is proportional to some constant; and for the second correlation function, the gap is again something which is either W^{-1} or the square root of something like this.
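To record the basic identity behind the transfer operator approach, in the scalar case used for illustration (a schematic form; f and g stand for the boundary functions of the chain):

\[
\int \mathcal{K}(x_1, x_2)\, \mathcal{K}(x_2, x_3) \cdots \mathcal{K}(x_{n-1}, x_n)\, f(x_1)\, \overline{g(x_n)} \prod_{j=1}^{n} d\mu(x_j)
= \big( \mathcal{K}^{\, n-1} f, \, g \big),
\]

and if \(\mathcal{K}\) is compact with eigenvalues \(\lambda_0, \lambda_1, \dots\), ordered so that \(|\lambda_0| \ge |\lambda_1| \ge \dots\), this becomes

\[
\big( \mathcal{K}^{\, n-1} f, \, g \big) = \sum_{j} \lambda_j^{\, n-1}\, c_j(f, g),
\]

with coefficients c_j determined by the expansions of f and g in the eigenfunctions. Since the operators here are not self-adjoint, making this expansion rigorous is exactly one of the difficulties discussed later.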
Now, if you have this information, and you use the fact that your transfer operator can be written as the central operator with a small perturbation (because this F was almost the identity operator: one plus something small), we obtain the operator of this form. Of course, if we know that the spectral gap of this operator is bigger than 1/n, we can say that the eigenvalues will not change too much, so it is not so difficult to obtain a bound for the n-th power of the eigenvalues. Then, if you want to compute your correlation function, you can see that only the first term dominates, so you obtain something like this, and it is just a matter of technique to derive from this property that the zeroth correlation function tends to one, that the second correlation function factorizes into the product of first ones, and so on. This relation is in fact Poisson statistics: the Poisson form of the second spectral correlation function.

And what happens if the spectral gap is less than 1/n? Then the main part is not the operator K but the operator F; you cannot work with the spectral gap, because in this situation an infinite number of eigenvalues (a number growing with W and n) contributes to the sum. But if you come back to the definition of R, you can see that the eigenvalues are almost one in this situation, so you can prove that this operator converges to the identity: not in the operator norm topology, but in the strong topology, on each vector. The idea is that you really can replace K_0 here by one and obtain something like this; and since F was a multiplication operator with 1/n in the exponent, there is no problem to see what will happen here. If you compute this, you obtain GUE-type expressions for R_0 and R_2.

So the picture, in the language of the spectral distribution of the operator K, is very clear: you need just to analyze the first spectral gap of this operator. But there are some difficulties. Oh, okay, before I speak about the difficulties, I would like to say that in fact it is more convenient to work not with the representation through the eigenvalues but with the spectral, resolvent, representation of the n-th power of the operator. Of course, you can write it as a contour integral, where the contour contains all the eigenvalues (see the formula below). If you know that the spectral gap is bigger than 1/n, you can separate the first eigenvalue and split the contour into two pieces: the first one around the first, zeroth, eigenvalue, and the next one like this. Here you have the factor z to the power n, and because of the bound on the gap the integral over the second piece is killed by this factor: you obtain that it is something small, and this gives the same contribution as in the situation with eigenvalues. For the GUE case, for GUE behavior, we cannot divide our contour into two parts and just neglect one of them; in this situation we need to control the resolvent on a contour like this, and on this contour the factor z to the power n is just of the order of a constant, so you need not take it into account; you need to analyze the behavior of the resolvent on this contour.

So this is the qualitative picture, and now I would like to explain why it is in fact not so simple to turn this picture into a rigorous proof. The first problem is that all these kernels are not kernels of self-adjoint operators, therefore you cannot use perturbation theory directly. The second, for me a very unpleasant technical problem, is that the representations contain integration over groups. The first group is the unitary group factorized by the product U(1) × U(1), and this group is good for me, because it is a compact group. But the second group is the hyperbolic group, and it is not a compact group, and the measure on it is, for me, not so well understood: the eigenvalues and eigenfunctions, for example of the Laplace operator (because our operators, as I will show you, are connected with the Laplace operator on this manifold), are much less understandable, at least for me. So there is a problem with the analysis of this hyperbolic manifold.
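In formulas, the resolvent representation of the n-th power described above is the standard contour (Riesz) integral; schematically, with the contour encircling the whole spectrum:

\[
\mathcal{K}^{\, n} = \frac{1}{2\pi i} \oint_{\mathcal{L}} z^{\, n} \, (z - \mathcal{K})^{-1} \, dz .
\]

When the gap between \(\lambda_0\) and the rest of the spectrum is bigger than 1/n, one takes \(\mathcal{L} = \mathcal{L}_0 \cup \mathcal{L}_1\), where \(\mathcal{L}_0\) is a small circle around \(\lambda_0\) and \(\mathcal{L}_1\) encircles the remaining eigenvalues; on \(\mathcal{L}_1\) the factor \(|z|^n\) is exponentially smaller than \(|\lambda_0|^n\), so that piece is negligible and \(\mathcal{K}^n \approx \lambda_0^n P_0\), with P_0 the Riesz projection onto the first eigenvalue. In the regime without a gap one must instead estimate the resolvent \((z - \mathcal{K})^{-1}\) on a single contour where \(|z|^n\) stays of order one.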
And maybe the most important problem is the structural problem: the structure of the kernel. Okay, in the first situation, when you consider R_0, you have a scalar kernel, and the picture is relatively simple. But for K_1 you have in fact a matrix kernel, and if you consider K_2, that is R_2, it means that you have to study a 2^8 × 2^8 matrix kernel, because you use eight Grassmann variables to derive the formula, so you have to consider the 2^8-dimensional space. Of course, using the symmetry, you can select some smaller block, 60 × 60, but for me it is again too much. That is why, because of the structural problem, we did not apply the method to the second correlation function directly from the very beginning, but tried to do something simpler: first R_0, then R_1, then, as I will show, a model result, and only at the end come to the second correlation function.

So now I will show you, just to give some analogy with statistical mechanics models, how the integral representation for R_0 looks. You can see this integral, which looks like a partition function in statistical mechanics: here you have the interaction term, and here you have some potential, and it can be written like this (a schematic form is given below). This representation is, as I told you before, a typical statistical mechanics model: W plays the role of the inverse temperature, and you have n particles which vary over unitary matrices. So this representation is very familiar to those who have studied statistical mechanics.

And if you replace X and Y by diagonal matrices conjugated by unitary ones, then you obtain that your kernel becomes a little bit involved, but the structure is clear. Here you have the part which depends on differences, because U_1 U_2^* is just the "difference" on the unitary group, and I call this the difference operator on the unitary group; here you have the multiplication operator, as I told you before, because this part depends only on X_1; and here you have the operator in between, which depends on the scalar variables, the eigenvalues of this matrix. In fact this term is responsible for some concentration: if you study this kernel, you can see that there is the large parameter W in the exponent, so the kernel is concentrated near some stationary point, which is very easy to find; there is an equation for the stationary point. And there is good news: around the stationary point, in the main order, you can replace the difference operator, which initially depends on X and Y, by something independent of them, with t* substituted for this parameter. So you obtain a kind of factorization, and you can analyze the eigenvalues of the operators separately, because you have just to multiply the biggest eigenvalues of this one by the biggest eigenvalues of this one and of this one. So it simplifies the problem of spectral analysis considerably.
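Schematically, and with all constants suppressed (this is a reconstruction of the general shape, not the exact formula from the slides), the representation for R_0 just described has the Gibbs form

\[
R_0 \sim \int \exp\Big\{ - \frac{W^2}{2} \sum_{j=1}^{n-1} \operatorname{Tr} (X_j - X_{j+1})^2 \; - \; \sum_{j=1}^{n} \operatorname{Tr} V(X_j) \Big\} \prod_{j=1}^{n} dX_j ,
\]

where the X_j range over small unitary matrices (2 × 2, in line with the two determinants in R_0), the first sum is a nearest-neighbour interaction along the chain, V is an effective single-site potential, and the large parameter multiplying the exponent plays the role of the inverse temperature. It is this large parameter that forces the concentration near the stationary point mentioned above.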
And here are the results. The first one was obtained by Tatyana, without me, three years ago; it is a result from the delocalization side: there is GUE statistics for the correlation function of determinants. It was obtained with the restriction that n is almost W; there is some polynomial difference, as small as we want, but it is there. And I would like to say that now we can recover this result with the method which I explained before; it will be simpler, and the bound will be better, because instead of the polynomial difference we should put here only a logarithm. There are also joint results from the localization side, where the idea that everything is concentrated around the main eigenvalue was realized, and we obtained Poisson behavior under this restriction. And I would like to recall that for us N proportional to W² is the same as n proportional to W, so there is some threshold, some crossover, when n is proportional to W.

Now, in the situation of the first correlation function, what do we have? We are lucky, because there is no integration over a unitary or hyperbolic group, but anyway we have some matrix operator. There are the kernels F, as usual multiplication kernels, and here, as before, the operator which is concentrated around some point, and here you have some matrix expression. And there is a bit of bad news: the matrix near the stationary point becomes a Jordan cell. So, for example, you can never use bounds by the norm, because the norm will be equal to 2 while the main eigenvalue is equal to 1. In the case of the correlation function of two determinants we used bounds by the norm, and here it does not work (see the small example at the end). It was exactly to learn how to overcome this difficulty that we studied this model. And there is also a joint result that, under some strange restriction on the spectral parameter (in fact it is a purely technical restriction, and we will think about how to remove it later; right now it is not so important, it is much more important to obtain the crossover), we prove pointwise convergence of the first correlation function to the semicircle law. Normally, for most models, this is not possible, and the bound here is also close to optimal. So the result is not so bad, but for us it was mainly an exercise in how to overcome the difficulties with the matrix kernel.

Now, for the second correlation function: we began to write the paper, and it was in preparation when Tatyana met Zirnbauer, and he asked why we did not do the sigma-model first. What is the sigma-model? If you do some scaling in the expression for the second correlation function, then after some scaling limit you obtain a model which is very similar to the second correlation function but simpler, because in this situation we again have an expression similar to statistical mechanics, now with supertraces and superdeterminants. But maybe it is not so important to understand it in detail, because I am not going to work with this here; it is just to show the people who are specialists how it looks.
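To make the Jordan-cell difficulty concrete, here is a minimal standard example (not the one from the slides) of why norm bounds fail for non-normal matrices:

\[
J = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad
J^{\, n} = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix},
\]

so the only eigenvalue of J is 1, while the norm of J^n grows linearly in n. Any estimate of powers of the transfer operator through the operator norm, which worked in the determinant case, therefore becomes useless once a Jordan-cell structure appears at the stationary point: the norm sees a number strictly bigger than the top eigenvalue, and the naive bound on the n-th power blows up.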