Okay, welcome back. Good morning. Just a brief reminder of where we stand. Yesterday I showed you how to conclude, with a neat argument involving path expansions, the decay of the fractional moment of the Green's function at high disorder. High disorder has a very explicit meaning here: the product of the degree of the graph with a certain constant, which we computed explicitly, has to be smaller than the disorder parameter λ raised to the power s. We then started to talk about how to get from decay of Green's functions to decay of the eigenfunction correlator. The eigenfunction correlator, remember, was defined through the total variation measure, and it is the object which controls transport: once we prove exponential decay of this quantity, we immediately get things like absence of transport, the fact that on the interval in question there is only pure point spectrum, and everything else we want. So in this first part of the lecture we are going to concentrate on how to prove exponential decay of the eigenfunction correlator given exponential decay of the Green's function. We went through certain of its properties, and I stopped after telling you that the eigenfunction correlator can in fact be viewed as a singular limit. We also noted that this relation, although very neat and compact, is a bit hard to use, because our bounds on the Green's function deteriorate in the limit s tends to 1 and the prefactor does not help. The remedy is to consider a family of eigenfunction correlators which is more easily related to the Green's function, and that is what I am going to start talking about now. So remember the eigenfunction correlator defined with the total variation measure: if you are just dealing with a self-adjoint matrix, it was given by a quantity of this form with s equal to 1. Here P{E} is the spectral projection onto the eigenvalue E; we are in a finite-dimensional Hilbert space, so this is just a finite sum.
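Since everything below builds on this high-disorder decay, here is a minimal Monte Carlo sketch of it. The model details below (a path graph, uniform disorder on [-1, 1], coupling λ = 20, fractional power s = 1/2, a small imaginary part η) are my own illustrative assumptions, not the lecture's exact setup:

```python
import numpy as np

# Monte Carlo sketch: fractional-moment decay of the Green's function at
# high disorder, for a 1d Anderson model on a path of N sites.
# All parameters (uniform disorder, lambda = 20, s = 1/2) are
# illustrative choices, not the constants from the lecture.
rng = np.random.default_rng(0)
N, lam, s, E, eta = 30, 20.0, 0.5, 0.0, 1e-2
hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

samples = 200
moments = np.zeros(N)
for _ in range(samples):
    H = -hop + lam * np.diag(rng.uniform(-1.0, 1.0, N))
    G = np.linalg.inv(H - (E + 1j * eta) * np.eye(N))
    moments += np.abs(G[0, :]) ** s      # fractional moment |G(0, x)|^s
moments /= samples

# Exponential decay: the averaged fractional moment at distance 12
# should be far smaller than at distance 2.
print(moments[2], moments[12])
assert moments[12] < 0.01 * moments[2]
```

At these parameters the averaged fractional moment drops by several orders of magnitude over ten lattice sites, which is the kind of exponential decay the path-expansion argument produces.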
Okay, and the generalization which I want to consider now is this: I weight the off-diagonal matrix element of these spectral projections, which in the case of simple spectrum is just the product of the normalized eigenfunctions at x and y, by the value of the eigenfunction at x, modulus squared, because that is exactly what the x-x matrix element of the spectral projection boils down to in the case of simple spectrum. The parameter s I take between 0 and 1, and notice that when I take s equal to 1, as I said, I get back the original quantity. So that is the quantity I want to control; it is the endpoint. At s equal to 0 this gadget, which is a bit unsymmetric in x and y as you can see from the definition, reduces to just the diagonal matrix element: nothing but the total variation of the spectral measure associated with that vertex. Now there are a few interesting properties; I actually wanted to postpone some of them to the exercise session. They are all rather elementary; for instance, this is a log-convex function of s. What is most important for us, and will be the key to the proof of the theorem I flashed a minute ago, is the fact that once you control the eigenfunction correlator at some intermediate value of s, you control it at the endpoint: remember, we want to take expected values of this creature, and just by Cauchy-Schwarz the endpoint can be bounded by the square root of a product of eigenfunction correlators at an intermediate parameter. So the task will be to bound the expected value of this interpolated eigenfunction correlator. How do we do this? For this I want to stay in finite volume; remember, we talked about the lower semicontinuity of the eigenfunction correlator, so without loss of generality I can always stay in finite volume.
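The Cauchy-Schwarz step just mentioned can be checked numerically. Assuming the interpolated correlator has the form Q_s(x, y) = Σ_E |φ_E(x)|^{2(1-s)} |φ_E(x) φ_E(y)|^s suggested by the verbal description above (simple spectrum, weights |φ_E(x)|²), the endpoint obeys Q_1(x, y) ≤ sqrt(Q_s(x, y) Q_s(y, x)):

```python
import numpy as np

# Sketch of the Cauchy-Schwarz step: the endpoint correlator Q_1(x, y)
# is bounded by sqrt(Q_s(x, y) * Q_s(y, x)) for any s in (0, 1).
# The formula for Q_s follows the verbal description in the lecture
# (simple spectrum, weights |phi_E(x)|^2); treat it as an assumption.
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # random self-adjoint matrix
evals, evecs = np.linalg.eigh(A)

def Q(s, x, y):
    phi_x, phi_y = np.abs(evecs[x, :]), np.abs(evecs[y, :])
    return np.sum(phi_x ** (2 * (1 - s)) * (phi_x * phi_y) ** s)

x, y = 0, 5
# s = 0 reduces to the total mass of the spectral measure at x:
assert abs(Q(0.0, x, y) - 1.0) < 1e-9
for s in (0.25, 0.5, 0.75):
    lhs = Q(1.0, x, y)
    rhs = np.sqrt(Q(s, x, y) * Q(s, y, x))
    print(s, lhs, rhs)
    assert lhs <= rhs + 1e-12
```

The inequality is deterministic (it is Cauchy-Schwarz on the sum over eigenvalues), so it holds for every realization of the matrix.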
And the task will now be to relate eigenfunctions to Green's functions. The basic tool for this is finite rank perturbation theory, which we touched on briefly yesterday when I flashed the Krein-Feshbach-Schur formula, and I want to go into it now in a bit more detail. So the task is to prove this theorem, namely that the interpolated eigenfunction correlator is bounded from above by the expected value of the fractional moment of the Green's function, raised to the same power s as in the correlator, integrated over the energy interval you are looking at. This then completes the proof of the high disorder result: why? Because you just feed in the Green's function estimate. Let me briefly flash how, although it really does not need a big proof. Remember that at almost every energy the Green's function stays finite even as you move down to the real axis, and we had estimates on the expected value of the Green's function on the real axis which were uniform in the energy; therefore the result carries over to an estimate of the eigenfunction correlator. So this theorem is the heart of the proof of our localization result. How do you prove it? Let me take a step back and recall, from your basic linear algebra class, some facts about eigenvectors and eigenfunctions in Hilbert space. If you have a self-adjoint operator H and take a vector, then in finite dimensions there is a subspace spanned by the monomials in H acting on that vector: if the dimension of the Hilbert space is finite, you may have learned in your linear algebra class that the span of these vectors generates the so-called cyclic subspace.
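A quick numerical illustration of the cyclic subspace: for a matrix with a degenerate eigenvalue, the span of {v, Hv, H²v, ...} is a proper subspace whose dimension is the number of distinct eigenvalues carrying spectral weight at v. The 4 × 4 example below is my own choice:

```python
import numpy as np

# Sketch: the cyclic subspace of a vector v is span{v, Hv, H^2 v, ...}.
# For a matrix with a degenerate eigenvalue, the cyclic subspace of a
# generic vector is proper: its dimension equals the number of distinct
# eigenvalues carrying spectral weight at v.
rng = np.random.default_rng(2)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
H = U @ np.diag([1.0, 1.0, 2.0, 3.0]) @ U.T        # degenerate eigenvalue 1

v = np.zeros(n); v[0] = 1.0                        # the vector delta_x
krylov = np.column_stack([np.linalg.matrix_power(H, k) @ v for k in range(n)])
dim = np.linalg.matrix_rank(krylov, tol=1e-8)
print(dim)   # 3 distinct eigenvalues -> 3-dimensional cyclic subspace
assert dim == 3
```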
Now what is the significance of the cyclic subspace generated by that vector? When you restrict the operator to this subspace, the spectrum of the restricted operator is simple. Remember, some matrices have degenerate spectrum, and you can remove this degeneracy by writing the original Hilbert space, say C^n in the linear algebra situation, as a direct sum of cyclic subspaces on each of which the spectrum is simple. This is also true in infinite dimensions, and if you are dealing with unbounded operators it is in fact better not to work directly with the powers of H, since they might have a problem acting on a specific vector (a domain question, which I am ignoring completely in this talk because all my operators are bounded); it is healthier to take the resolvent acting on that vector, at an arbitrary point z away from the spectrum of the self-adjoint operator. Why am I bringing this up? Because, as it turns out, it is an easy exercise that if you take a self-adjoint operator H_0 (let us talk about matrices, so a self-adjoint matrix H_0) and consider the rank one perturbation H_v = H_0 + v |δ_x⟩⟨δ_x| (my notation is the physicists' ket-bra notation for the projection onto the vector δ_x, and v is just a real number), then the cyclic subspace of H_0 associated with the vector δ_x is the same as the cyclic subspace of the perturbed operator associated with the same vector. So cyclic subspaces do not change under rank one perturbations.
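Both claims, invariance of the cyclic subspace under a rank one perturbation and simplicity of the restricted spectrum, are easy to test numerically; the example below reuses a small matrix with a deliberately degenerate eigenvalue (my own illustrative choice):

```python
import numpy as np

# Sketch: the cyclic subspace of delta_x is unchanged by the rank one
# perturbation H_v = H_0 + v |delta_x><delta_x|, and the restriction of
# H_0 to that subspace has simple spectrum.
rng = np.random.default_rng(3)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
H0 = U @ np.diag([1.0, 1.0, 2.0, 3.0]) @ U.T       # degenerate eigenvalue 1
x, v_coupling = 0, 2.7
Hv = H0.copy(); Hv[x, x] += v_coupling             # rank one perturbation at x

def cyclic_basis(H, x):
    vec = np.zeros(n); vec[x] = 1.0
    K = np.column_stack([np.linalg.matrix_power(H, k) @ vec for k in range(n)])
    r = np.linalg.matrix_rank(K, tol=1e-8)
    Q, _ = np.linalg.qr(K)                          # orthonormalize the span
    return Q[:, :r]

B0, Bv = cyclic_basis(H0, x), cyclic_basis(Hv, x)
# Equal subspaces: concatenating the two bases does not increase the rank.
assert np.linalg.matrix_rank(np.hstack([B0, Bv]), tol=1e-8) == B0.shape[1]
assert B0.shape[1] == Bv.shape[1] == 3

# Simple spectrum of the restriction of H0 to its cyclic subspace:
eigs = np.linalg.eigvalsh(B0.T @ H0 @ B0)
print(eigs)
assert np.all(np.diff(eigs) > 1e-8)
```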
Now, in rank one perturbation theory you know, or may recall, and that is the purpose of this slide, that if you know the spectrum of H_0 completely, then you can determine everything about the spectrum of the perturbed operator, at least when you restrict everything to the cyclic subspace generated by the vector associated with the perturbation. That is the content of the well-known theorem which summarizes rank one perturbation theory, and let me illustrate what is written there. Remember, one of the exercises I asked you to do yesterday was to compute, in such a situation, the resolvent of the perturbed operator H_v; by resolvent I mean the diagonal matrix element at the specific vector δ_x we are considering. What was the answer? The answer was that, as a function of v, it has the form
⟨δ_x, (H_v - z)^{-1} δ_x⟩ = 1 / (v - σ_0(z)),
where the object I called α(z) yesterday I now call σ_0(z). It is not hard, if you go back to your notes, to figure out what σ_0 has to be: just set v equal to zero; if there is any truth in this formula, then -σ_0 must be the inverse of the resolvent of H_0, so
σ_0(z) = -1 / ⟨δ_x, (H_0 - z)^{-1} δ_x⟩.
What does that entail? It entails that once you know everything about the spectrum of H_0, you can read off the spectrum of H_v, at least restricted to the subspace you can reach by applying resolvents to δ_x, that is, the cyclic subspace generated by δ_x. The picture is the following. What is the spectrum of H_v? It consists of those z at which the resolvent blows up, and the resolvent blows up exactly at those energies e for which σ_0(e) = v. So how does σ_0(e) look? It involves the resolvent of H_0, and the diagonal element of the resolvent of H_0 is just a sum over the eigenvalues E_j^0 of H_0 of 1/(E_j^0 - e), each term carrying as weight the normalized eigenfunction at x, modulus squared:
⟨δ_x, (H_0 - e)^{-1} δ_x⟩ = Σ_j |φ_j(x)|² / (E_j^0 - e).
Clearly this function of e has poles exactly at the eigenvalues, and these poles turn into zeros of σ_0. So suppose we have a 3 by 3 matrix with eigenvalues E_1^0, E_2^0, E_3^0. Between two consecutive eigenvalues the resolvent runs from minus infinity to plus infinity, so it must have a zero in between, and at each such zero σ_0 blows up. As for the asymptotics: when e goes to minus infinity the resolvent goes to zero from above, and since we put a minus sign and invert, σ_0 goes to minus infinity; likewise, when e goes to plus infinity, σ_0 goes to plus infinity. That is a rough sketch of what σ_0 looks like: it passes through zero at each unperturbed eigenvalue, blows up between consecutive ones, comes up from minus infinity on the far left and goes off to plus infinity on the far right.

The task of finding the eigenvalues of the perturbed operator is now extremely simple: you take your v, that is your level set, and you just read off where σ_0 crosses the level v; those crossings are the new eigenvalues. In this 3 by 3 situation I read off the perturbed eigenvalues: the third one here, the second one here, the first one here. And note something which you might or might not have learned in rank one perturbation theory and which is worth taking away: the perturbed eigenvalues always interlace the unperturbed eigenvalues. That is one of the basic features of rank one perturbation theory, worth remembering independently of any interest in random operators. Now we can actually say more: we can even construct eigenfunctions of the perturbed operator. Here is the basic message: the eigenfunctions of the perturbed operator turn out to be Green's functions of the unperturbed operator, φ_e = (H_0 - e)^{-1} δ_x. Why is that true? Let us see; I had you check that this is an eigenfunction. Apply H_v to φ_e: writing H_v = (H_0 - e) + e + v |δ_x⟩⟨δ_x|, the piece (H_0 - e) acting on φ_e just spits out the vector δ_x, and what remains is e times φ_e plus δ_x multiplied by 1 + v G_0(e), where G_0(e) = ⟨δ_x, (H_0 - e)^{-1} δ_x⟩ is the diagonal matrix element of the resolvent of the unperturbed operator. But notice that 1 + v G_0(e) = 1 - v/σ_0(e), which is exactly zero at the perturbed eigenvalues, where v = σ_0(e). That is the proof of the second part of the theorem.

But this entails even more: it entails that you can compute the spectral measure associated with the vector δ_x of the perturbed operator at every eigenvalue. Recall from your spectral analysis that the weight of the spectral measure is always given in terms of the normalized eigenvectors. I gave you the eigenvectors, and I told you that the spectrum is simple, so there cannot be another eigenvector associated with the same eigenvalue; hence there is nothing left to prove, you just need to normalize the vector φ_e in order to get the weight of the spectral measure at the eigenvalue e. And notice the following: this weight happens to be the inverse of the derivative of σ_0; just take the derivative of σ_0 and you see that it agrees. So one way to remember this formula in a more catchy way is to say that the spectral measure of the perturbed operator is nothing but the Dirac measure generated by the level condition,
μ_v^x(de) = δ(v - σ_0(e)) de,
where δ here is the usual Dirac distribution. That is one way to summarize my third point, and it is an important observation, because it gives you a one-line proof of a basic theorem in random operator theory: the spectral averaging principle. What is the spectral averaging principle? It is the observation that once you integrate the spectral measure of this rank one perturbation over v, what comes out is Lebesgue measure. There is nothing to prove here, because we just proved it: integrate the Dirac measure over v, and that's it. That is the famous spectral averaging principle.

Of course it is easy here because we are working in a discrete situation, but it can be generalized to continuum operators; a lot of what I am talking about can be generalized to continuum operators. It was in fact first invented by Kotani, already in 1984, in the situation of one-dimensional random Schrödinger operators, and Simon later cleaned this up; indeed it was the key to proving pure point spectrum before people went to the effort of the eigenfunction correlator proofs, and that was done by Simon and Wolff in 1986. It is also the core of something you may have heard people talk about, the famous Wegner estimate, a non-resonance estimate, and I want to pause for a second to give you its three-line proof. What is it? Take your finite volume operator, so take a finite graph Λ with operator H, which is just a matrix, and ask yourself the following question: given a Borel set I, what is the probability that there is at least one eigenvalue in I? You can express this by saying that the trace of the spectral projection is larger or equal to one. Now, you are all probabilists, so Chebyshev's inequality is not alien to you: estimate the probability by the expected value of this quantity. The trace of the spectral projection can be evaluated in any basis; let me take my favorite basis, the one associated with the lattice sites, the δ_x, and what you see is that each diagonal term is nothing but the spectral measure of x evaluated on the Borel set I. So just apply the spectral averaging principle: for each lattice site we have an independent random variable over which we can integrate, and we can then apply exactly this corollary. Well, not exactly: there is a distribution, and one of the reasons I wanted an absolutely continuous distribution with bounded density was precisely to be able to formulate these kinds of results. Once I write the integral over ρ(ω_x) dω_x, I can pull out the infinity norm ‖ρ‖_∞, and the rest is an integral of the form in the corollary, which gives the length of the Borel set; the sum over sites, which remains untouched, gives the volume of the graph. Altogether,
P(Tr P_I(H) ≥ 1) ≤ E[Tr P_I(H)] = Σ_{x∈Λ} E[μ_x(I)] ≤ ‖ρ‖_∞ |I| |Λ|.
That is the proof of the famous Wegner estimate.

Let me now go back, after these digressions, to how rank one perturbation theory relates the Green's function to the eigenfunction correlator. The core is the identity I am flashing here, sometimes known under the name resampling principle. The idea is that instead of the original random operator on our finite graph, we consider a random operator in which the original potential at x is changed to another value, call it ν; you resample the potential at this lattice site. That is the idea, and this resampling is helpful because the eigenfunction correlator of the original operator is then related to the Green's function of the resampled operator, modified by some weights which involve the shift in the potential. That is an identity. How do you prove it? Remember, the eigenfunction correlator involves the spectral projections associated with x and y, including the diagonal ones, and we now have formulas for those, because our rank one perturbation result tells us that these spectral projections are one-dimensional,
at least if you restrict everything to the cyclic subspace generated by δ_x; but that is exactly what we are doing, because we evaluate everything on the vectors δ_x. And we have an explicit formula for how the corresponding eigenfunction looks: it is the Green's function of the unperturbed operator, where in our application the unperturbed operator is the operator we actually want to look at. So what you have to notice is that on the cyclic subspace generated by δ_x, wherever the spectral projections have positive weight, the ratios of matrix elements of the spectral projections are just given by the corresponding ratios of Green's functions; that is what rank one perturbation theory told us. And this is at an energy where the inverse resolvent of the unperturbed operator hits the new potential value: if you go back to the formula, at those energies e in the spectrum of H the relevant inverse is just 1/(V(x) - ν). And then you just plug it in: you plug in the ratios of the spectral projections which you have on the left-hand side. The spectral measure μ_x you can view as a sum over the eigenvalues of H with weights given exactly by the x-x matrix elements, and that is the object you are summing over; raising to the power s gives you these two factors here, and the sum over the eigenvalues in the denominator is encoded in the integral over the spectral measure associated with x. That is the full proof.

Now let me use this identity. First, let me condition on all the randomness apart from the potential at x; in other words, let me just integrate over ω_x, which we may do because we are in the iid case. Who depends on ω_x? Well, the way we chose our perturbed operator, we resampled the potential at x, so the operator H_ν does not depend on ω_x any more, because I subtracted V(x) here. So this part is independent of ω_x; what depends on ω_x is the potential itself and, of course, the spectral measure associated with x. How do we integrate over the spectral measure? That was our spectral averaging principle; so if we get rid of the dependence on ω_x here, we are in good shape, because integrating over the spectral measure is easy: it gives us Lebesgue measure, aside from the distribution of the random potential. So let me write this as an integral over ω_x, taking the supremum over all possible values of ω_x in the potential term (for convenience I have just set λ = 1 here). This gives us this expression, and integrating over the spectral measure with the spectral averaging principle gives Lebesgue measure. That is cool, because with just one integration we have related the interpolated eigenfunction correlator to an energy average over a Green's function. The only problem is that it is not the Green's function of the original operator, but of the matrix in which the potential at x is resampled to ν. But ν is free, it is at our choice, so we just take ν to be distributed in such a way that once you integrate over ν (which is sometimes called v in my slides), it gives back the original probability distribution, and that completes the proof. So that is the whole trick: you exploit rank one perturbation theory to relate the eigenfunctions of the original operator to Green's functions of a resampled operator, you use the spectral averaging principle, and then you conclude by averaging over the resampled potential. A very simple final step.

This completes the first part of the lecture, because we have now proved the two things I flashed on the transparency at the very beginning: namely, that if you take λ very large there is only pure point spectrum, which we proved by means of the RAGE theorem, by first proving the exponentially decaying bound on the Green's function and then passing to the eigenfunction correlator; and that there is no transport, which follows immediately because the eigenfunction correlator bounds the transport. The next thing which I thought might be interesting for you to learn is how to conclude Poisson level statistics, and what Poisson level statistics means, in this regime of high disorder; that is what the remaining slides of today's lecture are about. So, what is Poisson eigenvalue statistics? The question we are asking is this: take our favorite Anderson model, now on my favorite graph Z^d, carve out a box of side length L, and consider the operator H_L, the Anderson model restricted to this box. This is now a matrix, and as such it has discrete eigenvalues; these eigenvalues are random. On the energy axis you have discrete eigenvalues, and as L tends to infinity they accumulate and form the spectrum of the Anderson model. Question: this is of course a random process, so what is the typical distance between the eigenvalues? Just three minutes ago we talked about the Wegner estimate. Remember, the Wegner estimate said that in finite volume, and we are now in finite volume, the average number of eigenvalues of H_L in an interval is bounded by a constant times the interval length times L^d, because that is the volume; let me write the volume of the box |Λ_L| here, to be consistent with my transparencies. In other words, on average the eigenvalues cannot cluster at distances closer than one over the volume. That means that if you now take the limit L tends to infinity, the wise thing to do is to fix an energy E and consider, not a box in space, but an energy interval whose size is a few multiples of one over the volume, and focus on what happens in this energy window centered at E. How do we do this? We simply rescale the original process of eigenvalues, which I called E_n, by the volume, and consider the random point process associated with the rescaled eigenvalues |Λ_L|(E_n - E). This now has a chance of converging in the limit L tends to infinity, because the typical distance between eigenvalues is one over the volume. So let us consider this random process, or the associated random measure. When we ask about its properties, we can first ask about the mean intensity, which is the question written down here.

It turns out, and this is something I have not yet talked about, that there is a quantity in random operator theory which always exists: the so-called integrated density of states. That is a measure, let me call it ν, defined on Borel sets, which emerges as the limit, as the box size goes to infinity, of the normalized trace of the spectral projection of the finite volume operators:
ν(I) = lim_{L→∞} |Λ_L|^{-1} Tr P_I(H_L).
Remember, this trace is just a sum over the volume of these diagonal gadgets, so it should not seem totally alien to you to suspect that this limit, which is a spatial average, in fact exists by the ergodicity of the underlying process, is non-random, and, because of spatial homogeneity, is given by the expected value of the diagonal matrix element of the spectral projection of the infinite volume operator,
ν(I) = E[⟨δ_x, P_I(H) δ_x⟩],
where the lattice site x can be chosen arbitrarily. That is a non-random limit, and what you prove here is some sort of ergodic theorem; let us take it for granted. One of the first things we can say about this integrated density of states measure is that, because of the Wegner estimate, it is absolutely continuous with respect to Lebesgue measure, and in fact given in terms of a density; let me call this density ν(E) as well. Physically, that is the so-called density of states, and it is what appears in the limit of the mean intensity: if that limit converges at all, its only chance is to converge to the density of states. Of course, one has to be a bit careful: there is a whole industry asking about regularity properties of the integrated density of states measure, and what the Wegner estimate shows is only that this measure, defined as above, is absolutely continuous with a bounded density. That does not mean the density itself is regular, say a continuous function; it could still be a very choppy function, though no reasonable person would expect that. In certain situations one can actually prove that it is even analytic, but in general one does not know, and so, in order to write down a statement like the one below, I have to be a bit careful and restrict to the Lebesgue points of the density of states, that is, the points at which the usual limiting procedure of differentiating the measure exists.

Now, what is the big deal in random operator theory? The big deal, which also relates to another favorite topic here, integrable systems and chaos, is a conjecture known as the Berry-Tabor versus Bohigas-Giannoni-Schmit conjecture, which Stefan can explain to you in much more detail. It basically says the following. Take a classically integrable system, say an integrable billiard, and consider its quantum analog; if you look at the spectrum of this quantum analog and randomize with respect to energy, then the point process you see is a Poisson process. Conversely, if you take a billiard which is classically chaotic, then Bohigas, Giannoni and Schmit conjectured that the quantum analog shows level repulsion, and even more: the eigenvalue process obtained by randomizing over the energy window is the same as the eigenvalue process you know from random matrix theory, namely that of the Gaussian orthogonal ensemble. In random operator theory there is an analog of this conjecture, in which the integrable phase is the localized phase and the non-integrable phase is the delocalized phase. The conjecture, the so-called spectral statistics conjecture, is the following: the spectral characteristics of the infinite volume operator, which is the Anderson model in our case, are reflected in the behavior of the process of eigenvalues in finite volume. Once you go into the regime of localization, the eigenvalue process as I defined it on the previous slide should converge to a Poisson process. Once you go into the regime of extended states, which, remember, is expected in dimension three and higher at sufficiently small disorder, what you see when you consider this process numerically is a GOE ensemble, corresponding to eigenvalue repulsion; and if you go to more complicated random operators which involve magnetic fields, you may even see GUE or other ensembles. What I want to do now is give you a proof of the first part of the spectral statistics conjecture in the random operator setting. That is a result which goes back to Minami in the 1990s, and it is formulated under the following condition: suppose we are in a regime where we can prove a fractional moment estimate on the Green's function, as we can at large disorder in arbitrary dimension, where the fractional moment of the Green's function decays exponentially, uniformly in the energy parameter, which I consider slightly up in the complex plane, and uniformly in the volume; that is what we proved, at least at high disorder. Minami's theorem states that once you zoom into an energy interval of size one over the volume around such an energy and take the limit L tends to infinity, this rescaled random process converges to a Poisson process with intensity given by the density of states. That is the result of Minami in the 1990s.
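Minami's theorem is an infinite-volume limit statement, but its content is already visible in a small simulation. The sketch below (1d chain, strong uniform disorder, eigenvalue gaps collected in a fixed energy window; all parameters are my own illustrative choices) checks only the crudest Poisson signature, the absence of level repulsion:

```python
import numpy as np

# Illustration (not a proof): at high disorder, the eigenvalues of a
# finite-volume 1d Anderson matrix show Poisson-like statistics, i.e.
# no level repulsion. For a Poisson process the normalized gap
# distribution is exp(-s), so a sizable fraction of gaps is small;
# GOE-type repulsion would suppress them. Parameters (N = 300, uniform
# disorder of width 15, window [-1, 1]) are illustrative assumptions.
rng = np.random.default_rng(4)
N, W = 300, 15.0
hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

gaps = []
for _ in range(100):
    H = -hop + np.diag(rng.uniform(-W / 2, W / 2, N))
    ev = np.linalg.eigvalsh(H)
    ev = ev[(ev > -1.0) & (ev < 1.0)]     # energy window around E = 0
    gaps.extend(np.diff(ev))
gaps = np.array(gaps)
s = gaps / gaps.mean()                    # unfolded (normalized) spacings

frac_small = np.mean(s < 0.2)             # Poisson predicts about 0.18
print(frac_small)
assert 0.08 < frac_small < 0.5
```

A GOE-distributed spectrum would put only a few percent of its normalized gaps below 0.2, so this fraction is a quick, if crude, way to distinguish the two regimes numerically.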
Okay. Now, I think I am speaking to probabilists, so I am probably carrying coals to Newcastle, but let me nevertheless remind you that there is a standard way of identifying a Poisson process. What did they teach you in your probability class? One way to prove that something is a Poisson process is to show that it is infinitely divisible: you identify it as a sum of arbitrarily many independent random processes which are all individually negligible. So what you have to do is identify the random instances as being created, like light rain, by many independent, individually negligible processes. That is essentially what the theorem says, with one footnote you should also take care of: infinite divisibility is not quite sufficient for a Poisson process — you also have to prove that the process contains no double points. So let me read the theorem with you. Suppose you have a triangular array of random point processes which, for each step n of the approximation, are all independent, and you know that in the limit n tends to infinity they are all negligible, in the sense that if you take a finite window and ask whether there is a point in it, then uniformly over all the independent copies the answer is no in the limit. Secondly, you exclude something not so nice happening, namely double points: the probability that your energy interval contains two or more points — summed over all of the processes — goes to zero. (Remember, we are thinking of the sum of these independent rains of points created through the triangular array.) The third condition fixes the intensity of what comes out, and that is the middle line here: since the probability of two or more points, summed over everything, goes to zero, you can read this line as saying that there is at most one point in the window, and then it clearly represents the mean intensity of the process. Under these conditions — independent rains of points, all negligible, no double points occurring, fixed intensity — the sum of these point measures indeed converges in distribution to a Poisson point process with the given intensity measure. So much for the reminder.

Now let me give you the physicist's proof of the theorem — which can, in fact, be turned into an actual proof. What we want to consider is the process of rescaled eigenvalues: mu_L is the sum over n of point masses at |Lambda_L| (lambda_n^L − E), where the lambda_n^L are the eigenvalues of H_L. How do I approximate this random point measure by something independent and asymptotically negligible? Very easily: I take the original box of linear dimension L and cut it into smaller boxes of linear dimension L^alpha, where alpha is smaller than one but still positive, so that these are not finite boxes — in the limit L tends to infinity they also grow to infinity. The idea is then to approximate the given random measure by the sum of the random measures of the boxes: m_L is the number of boxes, and each box j gives rise to its own eigenvalue process, built from the eigenvalues E_n^{L,j} of the operator restricted to that box. The important thing is to observe on which scale each of these processes lives — probably I need some other chalk here. If I ask, for each j, on which scale the process of the E_n^{L,j} lives, the answer is again one over the volume — but now the volume of the smaller box, which is much smaller; remember, I take the smaller boxes large, of size L^alpha, while the full box has size L. So if I consider an interval of size one over the big volume, then in such an interval I can indeed expect just one eigenvalue of each box process. That is the idea, and as you can see from this little picture, it points in the right direction. Now, what about the approximation sign — what is the danger here? The danger is the following: in the big box, the eigenfunctions of the operator are localized, and the localization length is of order one, since an eigenfunction resembles the Green's function and the Green's function is localized on order one. So a typical eigenfunction of H_L does not fall onto any of the cut lines you introduce by segmenting the box. If you ask how many of the eigenfunctions you are considering in this window are severely perturbed by the cutting, it is only those living in the vicinity of the cut lines — and that is, in comparison to the volume, a negligible fraction. So in some sense I could now go home, because I showed you the proof idea.
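This cutting step is easy to play with numerically. Below is a toy sketch (the sizes, seed, and disorder strength are my own choices, not from the lecture) of a one-dimensional Anderson Hamiltonian on a path, decoupled into blocks by deleting the hopping bonds between blocks. Each deleted bond is a rank-two perturbation, so the eigenvalue counting functions of the cut and uncut operators can differ by at most twice the number of cuts:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 200          # linear size of the big box (illustrative choice)
l = 20           # block size, playing the role of L^alpha
g = 3.0          # disorder strength (illustrative choice)

# Anderson Hamiltonian on a path of L sites: adjacency matrix plus
# i.i.d. random potential on the diagonal.
V = g * rng.uniform(-1.0, 1.0, size=L)
H = np.diag(V) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)

# Decoupled operator: the same matrix with the hopping bonds between
# consecutive blocks of length l removed (Dirichlet cuts).
H_cut = H.copy()
cuts = list(range(l, L, l))
for c in cuts:
    H_cut[c - 1, c] = H_cut[c, c - 1] = 0.0

ev_full = np.sort(np.linalg.eigvalsh(H))
ev_cut = np.sort(np.linalg.eigvalsh(H_cut))

def count(ev, E):
    """Number of eigenvalues strictly below E."""
    return int(np.searchsorted(ev, E))

# Each cut changes H by a rank-2 matrix, so by Weyl interlacing the
# counting functions differ pointwise by at most 2 * (number of cuts).
worst = max(abs(count(ev_full, E) - count(ev_cut, E))
            for E in np.linspace(-5.0, 5.0, 101))
print("max counting-function discrepancy:", worst, "<=", 2 * len(cuts))
```

In practice the discrepancy is far smaller than the deterministic rank bound, which is the lecture's point: only eigenfunctions near the cut lines are disturbed.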
Because that is really it: you just have to make this approximation precise, apply the standard result on how to prove convergence to a Poisson process, and you are done. Well, let me nevertheless go through the details — there is actually a beautiful estimate still to be proven, concerning the exclusion of double points. Good; let me see how much time I have — half an hour, I think that should work. So there is a to-do list. The first item is to make sure that the object which obviously fits the description of a null array in our abstract theorem indeed approximates our original point process. That is the first lemma: if I take the linear dimension of the small boxes to be L^alpha with some alpha smaller than one (that is again a typo on the slide), and if I am in the localization regime, then the process can be well approximated by the sum of the box processes — meaning that most eigenfunctions simply do not live in the vicinity of the cut lines. How do you prove such an estimate in detail? (Yes, sorry — that limit should be zero.) Well, just take a convenient class of test functions which is dense in L^1. One of the most convenient classes consists of functions related to the Green's function, because the Green's function is what we have under control. The Green's function is the resolvent, and what is dense in L^1 is the imaginary part of the resolvent — you probably know this better as the Cauchy–Lorentz distribution. So prove with me that the expected difference is equal to zero for test functions g = phi_z, where z is just somewhere in the upper half plane. Now let me compute the difference — actually, I did the computation for you. When you evaluate the measure on these functions, out pops not the resolvent itself but the imaginary part of the resolvent, and the sum over all n can be converted into a trace. Now look at the scaling: if I plug the rescaled process into phi_z, all eigenvalue differences are multiplied by the volume, so what pops out is the operator minus E times the volume; but since phi_z is essentially one over x, I can pull the volume out, and this gives me a z modified to z over the volume. That is the first part, and the same argument works for the second part — the only difference being, of course, that the first trace is over the whole box while the second is just over box j. Now, I can compute the trace not only in the eigenbasis of the operator but also in my favorite basis, the one associated with the lattice sites. This is done here, and notice that since all the boxes add up to the big box, the full trace can be thought of as a sum over the smaller boxes of traces over each box. Then you take the diagonal matrix element of the resolvent, or rather its imaginary part — but that is the Green's function. At which energy are we evaluating it? At the energy E on which we focus our window, plus a possibly complex correction which collapses to zero as L tends to infinity. That is also the reason why controlling the difference is not entirely trivial: you need to control the resolvent — the Green's function — near the real axis, and you need to show that these quantities, even when you sum over everything, are close to one another. Why are they close? This reduces the problem to one box: fix j, and in that one box think as follows. Most of the summation over x runs over the interior of the box, and only part of the vertices lie in a boundary layer. Once you take an x in the interior of the box Lambda_L^j — that is the second part which I wrote up — the boundary is far away, because these boxes grow with L, so we have the liberty of letting the boundary layer grow with L as well, at a smaller rate. And then it is intuitively clear — but this is something I actually want you to prove in the exercises, because it is a cool application of localization — that for interior x, the difference of these Green's functions can be controlled by the distance of the interior vertices to the boundary of the box. That is where the localization estimate goes in. What do we do with the boundary layer? Remember, we have a one-over-the-volume prefactor to spend, and one way to deal with the boundary layer is to just forget about cancellations and estimate it, in the worst case, by the number of terms. The good thing is that the expected value of the imaginary part of the Green's function is a bounded object. Why? Because you can just integrate over the random variable at x: take the imaginary part of such a quantity, this gives a Cauchy integral, and that is finite. Indeed, each of these terms, in its dependence on omega_x — that was one of the basic messages of yesterday — is of this explicit form; when z is in the upper half plane it has positive imaginary part, because it is a Herglotz function of z, and therefore the integral of the imaginary part is certainly finite. So we can throw away all of these boundary terms — just count the number of vertices in the boundary layer — and engineer things so that the ratio of the number of terms in the boundary layer to the number of terms in the small box goes to zero as L tends to infinity; so we simply neglect it. A really cool exercise, if you want to understand how to work with localization, is to prove that the second part is bounded in terms of the distance to the boundary. In essence, I will ask you in the exercise session to prove that even disregarding the imaginary part, if you take the difference of resolvents over nested boxes, then for points in the interior of the inner box this difference decays in the distance towards the boundary of the interior box. Good — so let us finish the proof. We had three things to check in the abstract theorem. First, the nullity of the array: that is clear — that is this picture — and it is done by the Wegner estimate. Take any one of the box processes — which, remember, you are scaling by the "wrong" volume, not its natural volume but the big one, so you are considering everything in a one-over-big-box neighborhood of the fixed energy — and there is at most a negligible number of eigenvalues in such a window. How many? That can be estimated by the Wegner estimate, which counts the number of eigenvalues in our interval, and that count goes to zero relative to the number of boxes. Then you need to fix the density; that is something I do not want to talk about — it is actually an easy exercise to derive it from the fact that we are at a Lebesgue point of the density of states. And then comes the second piece of fine print: Poisson processes are infinitely divisible processes which have no double points, so what we need to estimate is the probability that two or more points fall into the interval for the little process associated with box j. How do we do this? Here comes a cool estimate, known in this community as the Minami estimate. In the way I have written it down, it entails the following: the probability that M or more eigenvalues fall into the rescaled interval is, by Chebyshev, bounded by a product, and this product is bounded by something which goes to zero. Now let me show you the machinery behind such estimates — that is my last slide. It is an extremely cute observation which really goes back to Minami; Minami had a miraculous calculation using rank-two perturbation theory, but in fact, as it turns out, there is a five-line proof of the same result, found later by Combes, Germinet and Klein, which I think is so cute that it is worth presenting — it also relates to rank-one perturbation theory. What is the statement? It generalizes Wegner's estimate: take an operator on a finite graph of the form we have looked at, namely something like the adjacency matrix plus an i.i.d. random potential, and ask for the probability that our given interval I contains more than n eigenvalues. The claim: for n equal
to one, this is Wegner's estimate, which I showed you: the probability is bounded by the volume of the graph times the interval length, times a constant which involves the distribution of the potential. Minami's observation is that for n equal to two this probability actually scales as a product: as an upper bound, it gives exactly what you would get if the eigenvalues were independent. And for general n the independence bound still holds — that is what this theorem tells you — and using the theorem we can plug into the abstract result. How do you understand this theorem? Well, computing — or even just bounding — probabilities is always hard, so let's Chebyshev it. How do you Chebyshev it here? Take the product of the factors (Tr P_I(H) − j), with j running from zero to n − 1, divided by n factorial. The bound then follows if we can show, by induction, that this product of factors is, in expectation, bounded by the corresponding product of volume times interval length. For n equal to one, that is exactly what we already showed: the expected value of the trace is bounded by volume times interval length. So let us do the induction step, from n to n + 1. Where is my picture — here it is. At n + 1 the product runs from j equal to zero up to n, and I separate off the factor corresponding to j equal to zero. Now condition on all the random variables except the one at site x — this omega_x sitting at that lattice site. Why do we want to do this? Because we can write out the trace sitting in the j = 0 factor as a sum of diagonal terms in my favorite basis. If the nuisance here — the remaining product — were not there, we could just use spectral averaging to conclude that this factor is bounded in terms of the interval length, and we would pick up a volume factor from the sum over x. Of course, that does not work directly, because the remaining product depends on omega_x in a very complicated way. But we do not want to compute things — we want to upper bound things. So look at the picture, and think about the idea of resampling. What happens if I change the potential value at x, in those remaining terms, to just some other value? It is a rank-one perturbation, and you know what happens under a rank-one perturbation: the new eigenvalues are intertwined with the original eigenvalues. So once you take an interval and change the potential at one site to some other value, the number of eigenvalues in that interval can change by at most one. That is exactly this picture: independent of how large v is, the eigenvalue created between the original eigenvalues one and two can never overtake eigenvalue two, even if v is driven to plus infinity; and likewise, this eigenvalue here can never overtake that value, even if v is driven down to minus infinity. So under rank-one perturbations, the change in the number of eigenvalues in an interval is bounded by one. Another way to express this is to say that under rank-one perturbations the spectral shift function is bounded by one — a claim I am sure you have heard if you have followed a course on operator theory. And that is cool, because it means that this product can be upper bounded by the product in which we just increase everything by one — that is, we resample the whole thing. I should have put a hat here, or something indicating that this holds with the potential at x set to some other value, which I do not care about. This gives me the opportunity to integrate over omega_x in the separated factor, which is bounded by spectral averaging and, conditioned on the other variables, pulls out a factor of the interval length. But then the resampled value is free again, and we do the same trick in reverse: put back the original value and integrate over it, which restores the old problem — and that's it. Isn't that beautiful? That basically concludes the proof of the Minami estimate: it is in fact not a rank-two problem, as in the original paper, but simply rests on the observation that rank-one perturbation theory entails that the spectral shift is bounded by one. Now let me finish this lecture by talking about directions you could go into if you were really interested in pursuing the subject further. I have talked about localization at extreme values of the disorder strength, but if you go back to my original phase diagram, I claimed that there is also localization at extreme energies, namely near the bottom or the top of the almost-sure spectrum. How do you prove this? The answer is: basically by a proof technique similar to what we have seen. What remains to be proven is exponential decay of the Green's function, because I showed you the rest of the machinery — how to go from the Green's function to eigenfunction correlators — so once you can prove an estimate on the Green's function, you are done. Then there is the original proof of Anderson localization which mathematicians came up with, which was
complete localization in one dimension — and in fact even this can be done with the present machinery, because what you have to show is that the Green's function decays exponentially. In one dimension something curious happens: the Green's function is a product of Green's functions which themselves form a Markov process, and you can use techniques from dynamical systems, for example, to control this product and prove that things actually decay exponentially. One of the big open problems is to prove complete localization in two dimensions. As I mentioned yesterday in a private discussion, the thing to prove here — for probabilists — is in fact a Mermin–Wagner-type theory, because localization basically entails decay of correlations: if you are in a statistical mechanics setup and want to prove that the high-temperature phase prevails all the way down to lambda equal to zero, the technique you usually use is of Mermin–Wagner type, and it would be very good to get a grasp on that. Now, none of these proofs give sharp bounds on the phase boundary in higher dimensions — remember, starting from three dimensions we expect a regime of transport to pop up — and if you investigate all of our localization proofs and compare them to the numerical results of physicists, they are of course not sharp. But they can actually be made sharp by a technique also known from statistical mechanics, which goes under the name of finite-volume criteria. Essentially, the proof I showed you rested on the idea that the decay of the Green's function was concluded by saying: in order to go from x to y, I eventually have to leave x and then do something on the complement of that site. This could be called — and sometimes is called — a single-site criterion. You could imagine renormalizing this proof and arguing: in order to go from x to y, I eventually have to leave a box, say of side length l. If you have more information about the behavior of the full Green's function from the center of the box to its boundary, that might enable you to concatenate these steps and conclude properties of the infinite-volume Green's function from properties in finite volume. That is the idea of these finite-volume criteria, which were developed around the year 2000. There is also a completely different technique, which I mentioned, namely multiscale analysis, which is a bit more robust than this fractional moment method: it also allows you, for example, to deal not only with truly random operators but with operators with a pseudo-randomness, as in the almost-Mathieu or other problems of this type. In essence, what you need to show in multiscale analysis is some sort of estimate whose logic is the same: you try to prove, in some way, that the Green's function in finite volume decays from a site to the boundary. But that will not be true for all boxes, because there will be resonances — these are the resonant spots I talked about at the very beginning — and these resonances have to be taken care of by a probabilistic estimate, which in the case of random operators is usually a Wegner estimate, which, as you saw, is closely related to the finiteness of fractional moments. Delocalization is another topic one could talk about. There, something is known, but only on tree graphs; this was pursued as early as the 90s, and the complete picture came in work together with Sims, by Michael and me, a few years ago. Maybe the real task — but that is probably the most complicated problem in this business — is to go down from infinite dimension, which is what trees are, to any finite dimension. If I should give advice to youngsters: the two-dimensional problem is probably a much more doable one than this one, and you might risk your career by thinking about this one too heavily and not getting anything out — but, you know, surprises are always there. There are various other exciting stories which go with random systems, and one of them is the quantum Hall effect: once you put an electron in a perpendicular magnetic field in a two-dimensional structure, the so-called Hall conductivity is quantized. The amazing thing is that the Hall conductivity is quantized despite the fact that when von Klitzing originally measured this effect, he measured it on very dirty samples — on disordered structures. So disorder does not destroy this effect; it actually enhances it, and is in some way responsible for the quantization of the Hall conductivity. That is a story which was also pursued in the 90s. You are very welcome to read more about what I talked about today in the book which Michael Aizenman and I authored just two years ago. So I think the plan would be that I hand out an exercise sheet for this part of the lecture, and tomorrow I will go into more current topics. As you have seen, everything I talked about today was in principle already known back in the 90s, so the question is: what are people currently working on? And that is somewhat related to many-body localization, which I want to go into tomorrow.
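An editorial aside on the rank-one step in the Minami-estimate proof above: the fact that changing the potential at a single site moves the eigenvalue count of any interval by at most one is easy to check numerically. The following toy sketch (matrix size, seed, perturbed site, and test intervals are my own choices) perturbs a generic symmetric matrix by v at one diagonal entry, for v ranging over small and very large values, and records the worst change in interval counts:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0          # generic symmetric matrix (stand-in for H_Lambda)
x = 7                        # the perturbed site (arbitrary choice)

ev0 = np.sort(np.linalg.eigvalsh(H))

def count(ev, a, b):
    """Number of eigenvalues in the interval [a, b)."""
    return int(np.searchsorted(ev, b) - np.searchsorted(ev, a))

# Under H -> H + v * e_x e_x^T the new eigenvalues interlace with the old
# ones, so the count in any interval changes by at most 1, no matter how
# large |v| is (spectral shift bounded by one).
worst = 0
for v in (-1e6, -3.0, 0.5, 3.0, 1e6):
    Hp = H.copy()
    Hp[x, x] += v
    ev = np.sort(np.linalg.eigvalsh(Hp))
    for a, b in ((-2.0, 2.0), (-0.5, 0.5), (0.0, 1e9)):
        worst = max(worst, abs(count(ev, a, b) - count(ev0, a, b)))
print("max change in interval eigenvalue count:", worst)
```

Driving v to plus or minus a million, as in the picture from the lecture, never lets the new eigenvalues overtake their neighbors — exactly the interlacing statement the resampling argument relies on.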