OK, so let's continue. Our program continues with a lecture by László Erdős, who will continue his lectures on the matrix Dyson equation and random matrices. OK, thanks very much. Can you hear me in the back? Microphone OK? Very good. So before I come to the mathematics, let me come back for a sentence to the advertisement I made about IST yesterday, because several people approached me independently — very distinguished researchers from the audience, I don't want to name them — who asked the most important question: what about the coffee at IST? They wanted to see the coffee machines. So I asked our PR department, because we do have such things, to show the coffee machines. There are actually several coffee machines at IST, all free; this is one of them, this is another one. We also have a Starbucks on campus, for those who prefer that. And for those for whom all these things are not sufficient, there is a huge selection of all the specialties of Viennese coffee — these are not exactly at IST, but nearby and easily available. So coffee should not be any problem for anybody interested in IST.

OK, so let me come back to where we were, with a little summary of the first lecture. We did the following. I introduced the basic random matrix models of Wigner and Wigner type, and I emphasized that the point is to study models of increasing generality, by systematically relaxing the standard conditions on Wigner matrices. Remember that we always talk about a large $n \times n$ matrix with entries $h_{11}, h_{12}, h_{21}, \ldots, h_{nn}$, and the entries are random variables. A Wigner matrix is characterized by the following properties. The entries are centered, $\mathbb{E}\, h_{ij} = 0$. They are independent — independent, of course, up to the Hermitian symmetry: we always consider Hermitian matrices, so independence means that the entries on and above the diagonal are independent, and below the diagonal they are determined by the symmetry $h_{ji} = \overline{h_{ij}}$. And the matrix elements are identically distributed. This is the basic definition of the Wigner matrix. Now, what I explained last time is that we try to systematically relax all these conditions, all these inputs. We will remove the condition that the expectation is zero; one can allow any expectation. One can also remove the identical distribution by allowing an arbitrary variance profile, so the matrix elements no longer have to have the same distribution. But I still want to keep a condition of mean-field type. Last time we wrote it by introducing $S_{ij} = \mathbb{E}|h_{ij}|^2$ — this blackboard is really horrible — let me just do the centered case, otherwise it would be the variance. And we assume that this is of order $1/n$, meaning there is always an upper bound of that type, $S_{ij} \le C/n$; sometimes, for some theorems, we also need a matching lower bound. Certainly we want to say that the matrix elements are not too big: each of them has size $1/\sqrt{n}$. And I explained last time that this normalization ensures that the spectrum is of order one. So we keep this size condition, but otherwise the distributions can be arbitrary; this replaces the identically distributed condition.
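To make this normalization concrete, here is a minimal numerical sketch (my addition, not from the lecture): it samples a Hermitian Wigner-type matrix with a non-constant variance profile $S_{ij}$ of order $1/n$ and checks that the spectrum stays of order one. The particular profile chosen below is an arbitrary illustration.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)

# An illustrative variance profile S_ij of order 1/n, bounded above and below.
idx = np.arange(n)
S = (1.0 + 0.5 * np.cos(np.pi * (idx[:, None] - idx[None, :]) / n)) / n

# Centered complex Gaussian entries with E|h_ij|^2 = S_ij, Hermitian symmetry.
A = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(S / 2)
H = np.triu(A, 1)
H = H + H.conj().T                                          # h_ji = conj(h_ij)
H += np.diag(rng.normal(size=n) * np.sqrt(S.diagonal()))    # real diagonal

eigs = np.linalg.eigvalsh(H)
print("spectral range:", eigs.min(), eigs.max())  # stays of order one as n grows
```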
And we can also remove the condition of independence: we can consider correlated matrices, where the matrix elements have certain correlations among them. So that's the goal. Now, the other conclusion from last time was that the density profile may become model dependent. If you study Wigner matrices, you are used to the semicircle, and the semicircle is considered a basic fundamental object in random matrix theory. But it turns out that once you start relaxing these conditions, the semicircle disappears. Especially if you remove the identical distribution condition, the corresponding density will not be the semicircle. We saw pictures last time — for example, a certain type of matrix whose eigenvalue density is some other curve, some other density function. So the global density may become model dependent. But the other key feature is the local aspect of universality, which is easiest to see in terms of the universality of gaps. You take two neighbouring eigenvalues, you rescale the gap — you zoom in — so that it becomes of order one, and then you look at the distribution of that gap, the distribution of these neighbouring eigenvalues. That quantity remains universal, even if the density profile is far from the semicircle. So the local aspect of universality is a much more robust feature than the density profile. This is the second message. And the third message from last time was that there is a three-step strategy to prove universality, and the local law is the first of these steps. The local law is the only part which is model dependent; nowadays you prove the local law and then you can take existing theorems off the shelf, which eventually prove local universality as well. So we focus on local laws. A local law typically looks as follows: it establishes that the empirical eigenvalue density is approximated by a deterministic density, and it states that this approximation holds on the smallest possible scale. It holds on length scales just a little bit bigger than the eigenvalue spacing. It cannot hold on the scale of the eigenvalue spacing itself, because on that scale the density already fluctuates, but right above that scale it should hold, and that is what the local law says. But I also emphasized last time that the local law, in the form we need it for applications, is more than that — more than just information about the local empirical density of eigenvalues. It is better to think of it as information about the resolvent. So $G$ will always denote the resolvent, $G(z) = (H - z)^{-1}$, and $z$ is always in the upper half plane. Very often I use the notation $z = E + i\eta$ for the spectral parameter, with real part $E$ and imaginary part $\eta$, and $\eta$ is always positive — everything is on the upper half plane. So that's the resolvent. We discussed that the key point in understanding the resolvent is to understand it for as small $\eta$ as possible, because that allows you to resolve the local spectrum as finely as possible. So even when it is not written explicitly in the conditions, you should always think of $\eta$ as a small number.
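As a quick illustration of the gap universality mentioned above (my sketch, not from the lecture), one can compare the nearest-neighbour gap-ratio statistic $r_k = \min(g_k, g_{k+1})/\max(g_k, g_{k+1})$ for Wigner matrices with different entry distributions; this statistic needs no rescaling by the local density. Its mean is known to be roughly $0.60$ for GUE-type local statistics, versus $2\ln 2 - 1 \approx 0.39$ for independent (Poisson) eigenvalues.

```python
import numpy as np

def mean_gap_ratio(sample_entry, n=800, reps=10, seed=1):
    """Mean r-statistic over the bulk of the spectrum for a Wigner matrix
    whose entry components are drawn by sample_entry(rng, size)."""
    rng = np.random.default_rng(seed)
    rs = []
    for _ in range(reps):
        A = sample_entry(rng, (n, n)) + 1j * sample_entry(rng, (n, n))
        H = (A + A.conj().T) / (2 * np.sqrt(n))   # Hermitian, E|h_ij|^2 = 1/n
        ev = np.linalg.eigvalsh(H)
        bulk = ev[n // 4 : 3 * n // 4]             # stay away from the edges
        g = np.diff(bulk)
        r = np.minimum(g[:-1], g[1:]) / np.maximum(g[:-1], g[1:])
        rs.append(r.mean())
    return np.mean(rs)

gauss = lambda rng, s: rng.normal(size=s)
bern  = lambda rng, s: rng.choice([-1.0, 1.0], size=s)

print("Gaussian entries:", mean_gap_ratio(gauss))   # both come out ~0.60,
print("Bernoulli entries:", mean_gap_ratio(bern))   # far from Poisson's ~0.39
```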
And the goal of the local law in this more general setup is to understand this quantity, the resolvent. It is a big matrix, an $n \times n$ random matrix. But the fact is that the matrix elements of the resolvent, $G_{ij}$, can be approximated by the matrix elements $m_{ij}$ of a deterministic matrix $M$, in such a way that the difference, for every fixed $i$ and $j$, is small in a certain sense:
$$|G_{ij}(z) - m_{ij}(z)| \lesssim \frac{1}{\sqrt{n\eta}}.$$
This is not very well defined yet, but it will be made precise, and the size of the smallness is $1/\sqrt{n\eta}$. This is the typical format of a local law. You see here the feature I mentioned before: the smaller the $\eta$, the more information you get about the resolvent, but the error bound also gets worse. In particular, you cannot push $\eta$ arbitrarily far down: the estimate stays meaningful only as long as $\eta$ is bigger than $1/n$. The statement holds below that as well, but then the estimate loses its meaning. So this is the typical estimate of a local law; it tells you what the resolvent $G_{ij}$ is. Of course, you have to ask yourself what this $M$ is — there will be an equation for it — but the important point here is that $G$ is random and $M$ is deterministic. So this is a type of law of large numbers, if you wish: you approximate a random object, the resolvent itself, and it turns out that the resolvent is quite stable — it is close to a deterministic quantity. And then, of course, we will have to find out what this deterministic quantity is. This inequality is, of course, understood in a probabilistic sense: it cannot hold for all realizations, because on the left side you have randomness, and with some very low probability the resolvent may behave very, very badly. So inequalities of this kind — later on there will be more precise theorems — should be understood with very high probability. So let me just put it here: with high probability. The precise formulation will say that the probability that the estimate is violated is smaller than some terribly small number, small in terms of $n$.

So I think that is how far we got. Now let me discuss what I did not do last time: the motivation. Let me spend the next 10 or 15 minutes on a little bit of physics — why, from the physical point of view, one wants to study these kinds of random matrices. For the part of the audience that comes from probability theory, this may not be a relevant question, because, as I emphasized last time, the question itself — take a large random matrix and try to understand its eigenvalues — is a very natural question without any physics. But historically that was not the case. Historically everything came from physics, via Wigner, or actually even earlier from statistics. And although Wigner's original motivation was nuclear physics — understanding the resonance spectrum of heavy nuclei — it turns out that nowadays there is another area of physics, solid state physics, where random matrices have become very prominent, and this is disordered quantum systems. From the physics point of view, random matrices are nowadays used more in this business than in nuclear physics.
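For the Wigner case, where $M = m_{sc}(z) I$ with $m_{sc}$ the Stieltjes transform of the semicircle law, here is a small numerical check of this estimate (my sketch, not from the lecture); $m_{sc}$ is computed by iterating its fixed-point equation $m = 1/(-z - m)$, which converges for $\mathrm{Im}\, z > 0$.

```python
import numpy as np

n = 2000
rng = np.random.default_rng(2)

# GUE-type Wigner matrix with E|h_ij|^2 = 1/n, spectrum close to [-2, 2].
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / (2 * np.sqrt(n))

E, eta = 0.3, 100.0 / n        # bulk energy; eta well above the spacing ~1/n
z = E + 1j * eta

# Stieltjes transform of the semicircle law via m = 1/(-z - m).
m = 1j
for _ in range(200):
    m = 1.0 / (-z - m)

G = np.linalg.inv(H - z * np.eye(n))
err = np.abs(G - m * np.eye(n)).max()
print("max_ij |G_ij - m*delta_ij| =", err)            # comparable, up to
print("1/sqrt(n*eta)              =", 1 / np.sqrt(n * eta))  # constants/logs
```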
So I would like to explain a little about disordered quantum systems, and you will see how random matrices emerge in that situation. There are also many other motivations which I am not going to talk about. In statistics, of course, there are the Wishart matrices, the sample covariance matrices; in that business the Tracy-Widom edge regime is very, very important, because in statistics you are mostly interested in the largest and second largest eigenvalues, while in the physical applications it is more the bulk spectrum that matters. And then there are other, more recent applications: wireless communication and channel capacity, and also a little biology business — certain models of neural networks that are ODEs with random coefficients. This was discussed in Fyodorov's lectures, for example. So there are many other applications of random matrices, but because my background is physics, let me explain a little about disordered quantum systems.

Okay, so on this page you see quantum mechanics 101, the very basic axioms of quantum mechanics. A quantum mechanical system is described by the following ingredients. There is a configuration space, called $\Sigma$, and there is a state space, which is the $\ell^2$ space over $\Sigma$. Here is a typical situation: if you want to describe one single spin — just one single spin — it has two possible states, up and down. These are the two configurations of that spin, so $\Sigma$ is this two-element set, and the state space is the $\ell^2$ space over that set. The set is finite, and the corresponding $\ell^2$ space is just $\mathbb{C}^2$. So the state of the system is described by basically two numbers, the two coordinates of the wave function in $\ell^2(\Sigma)$, which describe the quantum occupation probabilities of the spin-up and spin-down states. More relevant for all these considerations: if you take a solid state model, for example, you can take $\Sigma$ to be a square lattice in $d$ dimensions, three dimensions say. You should think of a metallic lattice, where the metallic ions sit at the lattice sites, and you would like to describe the electrons, which hop from one site to another. In that case an element of the configuration space just tells you at which ion you, as an electron, are. And the corresponding state space is the $\ell^2$ space over this $\Sigma$, in that case over $\mathbb{Z}^3$: the square-summable complex sequences labelled by elements of $\mathbb{Z}^3$. Okay, so this is one ingredient, the state space in quantum mechanics. The other ingredient is the Hamilton operator. The Hamilton operator is an operator acting on this $\ell^2$ space; it is always a symmetric, self-adjoint operator — Hermitian, self-adjoint. Whether you say matrix or operator, these are the same thing here. If the configuration space has finite cardinality, everything is finite dimensional, otherwise it is infinite dimensional. So you can think of it as an operator, or, especially in the finite dimensional case, you can just think of it as a $\Sigma \times \Sigma$ matrix — and that is actually our situation. So this $H$ here, the random matrix, will be thought of as the Hamiltonian — the Hamilton operator or Hamilton matrix — of a quantum system.
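As a toy illustration of these axioms (my sketch, not part of the lecture): for the single spin, $\Sigma$ has two elements, the state space is $\mathbb{C}^2$, any Hermitian $2 \times 2$ matrix is a valid Hamiltonian, and its real eigenvalues are the energy levels.

```python
import numpy as np

# Configuration space: two spin states; state space: l^2({up, down}) = C^2.
# A state is a unit vector; |psi[k]|^2 is the occupation probability.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# A generic Hermitian Hamiltonian on C^2 (numbers chosen arbitrarily).
H = np.array([[0.5, 0.2 - 0.3j],
              [0.2 + 0.3j, -0.5]])
assert np.allclose(H, H.conj().T)      # Hermitian, hence a valid Hamiltonian

energies = np.linalg.eigvalsh(H)       # real eigenvalues: the energy levels
print("energy levels:", energies)
print("occupation probabilities:", np.abs(psi) ** 2)
```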
The matrix elements $H_{xx'}$ — this is the intuition behind the matrix — describe quantum transition rates between two sites, between two elements $x$ and $x'$ of the configuration space. And the eigenvalues of $H$, which are always real, are interpreted as the energy levels of the system. These are the basic quantum mechanical axioms. The Hamilton operator is also what drives the time evolution — that is probably actually the more important application. If you want to describe how the system evolves: the system is in a certain initial state $\psi_0$, a fixed element of this $\ell^2(\Sigma)$ space, and as time goes on, the state changes. The basic equation of motion is the Schrödinger equation, $i\partial_t \psi_t = H\psi_t$, which you can actually solve explicitly: $\psi_t = e^{-itH}\psi_0$. So the unitary evolution, the unitary group generated by $H$, determines how the state evolves. Okay, so this is basic quantum mechanics. Now, so far there was no disorder, but the system immediately becomes disordered if the matrix elements of the Hamiltonian become random — and then it is a random matrix. The randomness can have various sources; typically in physics it comes from some kind of disordered situation, an additional disorder in the original quantum mechanical system which you do not want to describe from first principles, only by statistical means.

Okay, so now I focus on disordered quantum systems, and there is a big universality conjecture for disordered quantum systems. I am not going to state it as an absolutely rigorous mathematical conjecture, but there is a big picture behind it which is supposed to hold under very, very general conditions — maybe you have to exclude some pathological examples, but most likely it is correct in a very general sense. It says the following: if you have a disordered quantum system with a sufficient amount of complexity — I of course talk about random Hamiltonians, though in principle a deterministic Hamiltonian is also a random Hamiltonian if you wish, but I don't wish that; it really has to have sufficient complexity — then it exhibits one of the following two behaviors. In physics terms, it is either an insulator or a conductor. Here on this page you see the insulator, and on the next page you will see the conductor, so let me explain the insulator. A system is in the insulator phase — and this typically happens when the disorder is very strong — if the following features occur, and the general belief is that they come as a package: once you are in the insulator phase, all these features happen at the same time, and if you are in the other phase, the conducting regime, the opposites of all these features happen. There is no rigorous proof of that statement, but that is the general belief. Let me formulate it, for simplicity, in the case when $\Sigma = \mathbb{Z}^d$, so we have an electron hopping on a square lattice in $d$ dimensions. One of the features of the insulator regime is that you have localized eigenvectors. What does that mean? An eigenvector $\psi$ is a wave function, an $\ell^2$-normalized wave function.
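A minimal sketch of this time evolution (my illustration, with an arbitrary Hamiltonian): diagonalizing $H$ gives $e^{-itH}$ directly via the spectral theorem, and unitarity preserves the $\ell^2$ norm.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# An arbitrary Hermitian Hamiltonian on l^2 of an n-point configuration space.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2

# Initial state: electron localized at a single site (a delta function).
psi0 = np.zeros(n, dtype=complex)
psi0[n // 2] = 1.0

# Solve i d/dt psi = H psi via the spectral theorem:
# psi_t = e^{-itH} psi_0 = U exp(-it*Lambda) U* psi_0.
evals, U = np.linalg.eigh(H)

def evolve(t):
    return U @ (np.exp(-1j * t * evals) * (U.conj().T @ psi0))

for t in [0.0, 1.0, 10.0]:
    print(f"t={t:5.1f}  ||psi_t|| = {np.linalg.norm(evolve(t)):.6f}")  # stays 1
```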
So let me just draw a picture. Here is $\Sigma$, the square lattice — maybe I do it in one dimension, it is a bit easier to draw. The eigenfunction $\psi$ just takes values at the lattice points, a function on $\mathbb{Z}$ evaluated at discrete points. It is actually a complex-valued function, but I cannot really draw a complex function, so I just draw its absolute value squared, $|\psi(x)|^2$ for $x \in \mathbb{Z}$ — and you can do the same in higher dimensions. If this is an eigenfunction, then the claim is that it is localized, in the following sense: most of the mass of the eigenfunction sits in a region of the configuration space which is much smaller than the whole configuration space. In other words — here it is formulated for the whole of $\mathbb{Z}$, which is of course infinite, so that is not exactly what is meant; what is meant is that you take a huge box, a big lattice of size $L$. Then the eigenfunction might look spread out, supported more or less everywhere — that would be one possible behavior. But the other possible behavior is that the eigenfunction is localized, essentially localized in the sense that it is supported in a region $\Sigma'$ which is much smaller than the big space. Not exactly supported: from quantum mechanics you know that a typical eigenfunction is never completely compactly supported, but it has a fast decaying tail, typically exponentially decaying, so most of the mass sits in a smaller region. Roughly speaking, that is localization; it can be formulated more precisely. Another feature of the insulator — you see, the resolvent comes up again — is that you have exponential off-diagonal decay of the resolvent. This means the following: you take an off-diagonal matrix element $G_{xx'}$ of the resolvent, where $x$ and $x'$ are just two points in the lattice, and you want to know how it behaves. And this quantity decays exponentially, $|G_{xx'}| \lesssim e^{-|x-x'|/\ell}$, with a certain length scale $\ell$, called the localization length — a basic feature, a basic parameter of the model. In other words, physicists formulate this by saying that the system has a finite localization length. This exponential decay of the resolvent is of course closely related to the exponential decay of the corresponding eigenfunctions. Another feature is the lack of transport. We already discussed the quantum evolution under the Schrödinger equation: start from an initial state, for example a delta function — a completely localized state, the electron sits at one single point, say at zero — and let it evolve according to the quantum evolution, forming a new wave function $\psi_t$ at time $t$. Then you compute the mean square displacement of that wave function, $\sum_x x^2 |\psi_t(x)|^2$. Remember that $|\psi_t|^2$ remains a probability density: $\psi_0$ was normalized in $\ell^2$, and of course the unitary evolution keeps it normalized in $\ell^2$.
So if you did not have this $x^2$ here — if you just summed up $|\psi_t(x)|^2$ over $x$ — you would of course get the $\ell^2$ norm, which remains one. But now you weight it with $x^2$, so you are asking how much the wave function spreads out. And lack of transport means that even if you run the system for an arbitrarily long time, the mean square displacement remains finite: $\sum_x x^2 |\psi_t(x)|^2 \le C$, with a constant uniform in time. So the state evolves — it keeps on changing — but it never really leaves a compact set, up to some fringe. [Question from the audience.] No, no — these questions are always finer questions than just eigenvalue questions. That is actually an important point here, and one of the many reasons why we are so interested in the resolvent and not just the trace of the resolvent: the trace of the resolvent carries information only about eigenvalues, while the resolvent itself carries information about eigenfunctions, eigenvectors as well. And to decide whether you are in the insulator regime, the localized regime, or the other one, you need to know much more — you need to know the eigenvectors as well. Okay, so that was the third criterion, the third feature. The fourth one — which is actually closest to our discussion here, and was known before in the random matrix community — is the eigenvalue statistics: you look at how nearby eigenvalues behave. As you have seen many times, in a random matrix nearby eigenvalues are certainly not independent; they are strongly correlated, and there is Wigner's surmise and the Gaudin distribution — an explicit distribution which describes how the neighbouring eigenvalues behave. They are strongly correlated; in particular there is the phenomenon of level repulsion between them. But if you are in the insulator regime, you do not get random matrix statistics: the local eigenvalues are actually independent, and they behave locally like a Poisson point process. So in this sense there is a characterization of the insulator regime purely in terms of eigenvalues. But there is no direct proof saying that if the eigenvalues are Poisson, then you immediately get all the other features — these are all very hard theorems, whenever they can be proven at all.
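For reference (my addition, not from the lecture), the two local statistics just contrasted have explicit densities for the rescaled nearest-neighbour spacing $s$. The Wigner surmise below is the GOE version; the exact answer is the Gaudin distribution, which the surmise approximates very well. Note $p(0) = 1$ for Poisson (no repulsion) versus $p(0) = 0$ for Wigner (level repulsion):

```latex
% Poisson statistics (insulator): independent eigenvalues, no level repulsion
p_{\mathrm{Poisson}}(s) = e^{-s}, \qquad p_{\mathrm{Poisson}}(0) = 1.

% Wigner surmise (GOE, conductor): level repulsion, p(s) \sim s as s \to 0
p_{\mathrm{Wigner}}(s) = \frac{\pi s}{2}\, e^{-\pi s^2 / 4}, \qquad p_{\mathrm{Wigner}}(0) = 0.
```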
Okay, this is one regime. Remember, the basic idea, the basic conjecture, is that there is a dichotomy: you are either in the insulator regime or in the other regime, the conductor regime. Here I did not write up the formulas very precisely, but basically everything is the negation of the previous properties. You have delocalized eigenvectors: the eigenvector is not essentially supported in a tiny set, it is supported more or less everywhere. There is no exponential decay of the resolvent; instead the resolvent decays like the resolvent of the Laplacian, a power-law decay depending on the dimension. There is transport: if you look at this quantity, the mean square displacement, it grows with $t$ — in fact it grows linearly in $t$, as if it were a diffusion, so the distance squared behaves like the time, a diffusive phenomenon. And the last feature is that the eigenvalues are strongly correlated — in particular there is level repulsion — and in particular the Wigner-Dyson local statistics emerges. So this is the other regime, and it typically happens at weak disorder. You need some disorder, but if the disorder is too strong, you are expected to be in the localized regime; if the disorder is weak, but still not completely negligible, you are in this regime. Now, for those who have not seen this before, let me just mention that this is called the Anderson model, especially when it is set on the lattice, and it is one of the main questions in mathematical physics. It is a somewhat different community, but many people in mathematical physics work on random Schrödinger operators, and a special random Schrödinger operator is the Anderson model. There have been several results; the most important point is that the localization regime is much better understood than the delocalization regime, which at first sight is a little surprising. Historically, the whole thing was discovered by Anderson, a physicist, and he predicted localization, which sounded like the surprising thing. The other regime, the conducting regime at weak disorder, did not sound so surprising — simply because in the weak disorder regime you can think of it, perhaps wrongly, as a perturbative statement: you start from a solvable model, for example the usual Laplacian without any randomness, you put in a little randomness, and you hope to understand it perturbatively. So to a physicist this regime does not sound surprising, and the other regime sounds more surprising, because if you put in a huge randomness, then suddenly everything you might expect from perturbation theory turns out to be wrong. But mathematically it is the opposite.
Mathematically, the first regime, the insulator regime, is much more accessible. There have been several papers — a groundbreaking result by Fröhlich and Spencer, who first proved the existence of this localized regime, and many other works since then. I would not say that everything is proven here in full generality, but all these features, in one way or another, have been proven for various specific models, so to some extent the insulator regime is well understood. The conducting regime, on the other hand, is not at all understood in the random Schrödinger situation; this is a big open question in mathematical physics. Now, let me come to more concrete models, because so far we have just been talking about general disordered quantum systems. The first model is the random Schrödinger operator, specifically the Anderson model. This is basically what I drew before: the corresponding configuration space is the lattice $\mathbb{Z}^d$, or rather a big finite box in it — you can also impose periodic boundary conditions and put it on the torus. And the picture is the following. This is the ionic lattice, and the ionic lattice is not exactly regular: it looks like a perfect square lattice, but it is not, because there are some thermal fluctuations, some irregularities, some disorder — the randomness here is indicated by different colors, different shades of gray. And then you have an electron, the green electron, which is moving in this environment. So this model describes one single electron in a random environment, a specifically designed random environment, and the electron moves according to quantum mechanics. The picture is misleading: it looks as if the electron sits somewhere — it is a green dot — and then decides to move from one site to another. But a quantum electron does not really do that. A quantum electron is, first of all, typically not sitting at one site; it is distributed, its wave function is supported on many different sites, and what you really describe is the evolution of the wave function, not the evolution of one single location. Still, this is how you can think of it. Now, the corresponding Hamilton operator, or Hamilton matrix in that case, looks as follows — I just wrote up the one-dimensional case, the simplest one-dimensional lattice. It has two parts. It has a Laplacian part, which here just has ones above and below the diagonal — that is the Laplacian, the hopping part — and then on the diagonal you have a random part. So the Hamiltonian in that case, written in one line, is $H = \Delta + V$: this is the Laplacian, and $V$ is the random diagonal.
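To close this part, here is a small numerical sketch (mine, not from the lecture) of this one-dimensional Anderson Hamiltonian at fairly strong disorder, together with the insulator signature from before: the exponential off-diagonal decay of the resolvent $|G_{xx'}|$. The disorder strength and box size below are arbitrary illustrative choices.

```python
import numpy as np

n, W = 400, 5.0                    # box size, disorder strength (illustrative)
rng = np.random.default_rng(4)

# 1D Anderson Hamiltonian: ones above/below the diagonal (the Laplacian /
# hopping part) plus iid uniform random variables on the diagonal.
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
H += np.diag(W * rng.uniform(-0.5, 0.5, size=n))

# Resolvent at a bulk energy with a small imaginary part eta.
z = 0.0 + 1e-3j
G = np.linalg.inv(H - z * np.eye(n))

# Off-diagonal decay |G_{x0, x0+d}| away from a site in the middle of the box:
# at this disorder it falls off roughly like exp(-d / l) for a short
# localization length l.
x0 = n // 2
for d in [1, 5, 10, 20, 40]:
    print(f"|G(x0, x0+{d:2d})| = {abs(G[x0, x0 + d]):.3e}")
```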