I would like today just to carry on from last time, and I think it's a good idea if I briefly remind you of what happened, so I will do this recap on screen and then switch back to the blackboard to do some more details and explain things we didn't get to the day before yesterday. Okay, so just to remind you of the general setting: we are studying the random Schrödinger equation. Here is the Schrödinger equation with ℏ = 1; it is for a single particle in a random environment. In this famous Anderson model the particle moves on a lattice, so the kinetic term is the discrete Laplacian, and at every lattice site, labeled here by a, there is a potential: we just take iid random variables v_a. For the purposes of this talk these are going to be Gaussians; the general theorem doesn't require Gaussianity, but it's easier to explain this way. The whole thing comes with λ times this potential, where λ is very small, so one is in the regime where one would expect extended states in the Anderson model. I consider three dimensions, or at least three dimensions; two dimensions has extra singularities which I don't want to discuss. As mentioned, there is a continuum version of this, called the Lorentz model, where the kinetic term is really just the ordinary Laplacian. I mention it because I will sometimes do estimates only in the continuum case, simply because it's a lot easier; the lattice case is actually quite intricate in some ways once you start doing estimates.

Okay, so we then defined the Wigner function, the standard object which replaces a phase space density in quantum mechanics; its marginals are the distributions in position and in momentum space. Then of course we scale things, time and space, so we also introduce scaled Wigner functions. One important and very simple thing to remember: if you also Fourier transform in the x variable, you just get the product of ψ̂ and ψ̂-bar with the arguments shifted by this extra Fourier parameter ξ, and after the rescaling the shift is actually by εξ/2. So the scaling limit is such that one is very close to having these two arguments equal, and for that reason I will sometimes just show you the estimate when they are equal; that is a lot easier than going through the general expressions. The point here is that we go to the diffusive timescale: λ^{-2-κ} is the scaling of time and λ^{-(2+κ)/2} is the spatial scaling. And as I explained last time, if you test weakly, with a test function living on the macroscopic scale, then you get convergence to the solution of a heat equation with a particular diffusion coefficient.
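To fix the setting concretely, here is a minimal sketch of the model just recalled; this is my own illustration, not code from the lecture, and the box side L, the coupling lam, and the Gaussian normalization are arbitrary illustrative choices.

```python
import numpy as np
import scipy.sparse as sp

# Sketch (illustration only): the Anderson Hamiltonian
# H = -Delta + lam * V on the periodic box {0,...,L-1}^3,
# with iid standard Gaussian potential v_a at every site.
L, lam = 8, 0.1
rng = np.random.default_rng(0)

ones = np.ones(L - 1)
# 1d nearest-neighbour hopping with periodic wrap-around
hop = sp.diags([ones, ones, [1.0], [1.0]],
               [1, -1, L - 1, -(L - 1)], format="csr")
I = sp.identity(L, format="csr")

# discrete Laplacian on the box: neighbour sum minus 2d on the diagonal
lap = (sp.kron(sp.kron(hop, I), I) + sp.kron(sp.kron(I, hop), I)
       + sp.kron(sp.kron(I, I), hop) - 6.0 * sp.identity(L ** 3))

V = sp.diags(rng.standard_normal(L ** 3))   # the iid random environment
H = -lap + lam * V                          # random Schroedinger operator
```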
So now let's remember what we did in more detail. We took the Duhamel formula, or equivalently the resolvent formula, which is written once more here, and we iterated it. The nice feature of this formula, just to remind you for today, is that you still have the full H in here, so you can iterate some number of times and then stop the iteration; one can rearrange things very nicely this way, and we will see when we do renormalization that this is useful. Then of course you can write out these terms, so there are the explicit terms and the remainder term. The explicit terms only contain H0 and V, so all these quantities are known; the remainder term still contains the full unitary time evolution. If we now insert for V the objects we had here, call them v_a, then obviously this gives rise to a sum over all the different a's, and the interpretation is as a collision history: the particle goes to site a1, gets scattered there, goes to a2, a3 and so on. It is important to keep in mind that this collision history is fixed before we do any averaging over the randomness; it is an independent thing. Then I told you that you can delete this unitary when you take the norm, so the norm of the remainder is bounded by an explicit term, the one with N-1 factors of λV, and there is still an integral over time, so you basically pay a factor of t, which is of course large and has to be compensated by some estimate on this term. This is for the norm, and by a simple Schwarz inequality one sees that the pairing of the Wigner function with the observable is continuous in the norm, so it is actually useful to estimate these remainders in norm. I think we also did this last time.

Then we decided to go to momentum space, simply because the time evolutions are explicit there, just multipliers. In principle one wouldn't have to do this, but it is more convenient here. Since I mentioned all these other many-body models with Feynman graph expansions: there, when you want to do remainder estimates, you never stay in momentum space, you always work in position space, but then you have to go back and forth to use certain estimates, which we will use here too. Okay, so the structure of this formula is basically a residue integral: you move the pole by some η into the complex plane, and when you take the residue you get a factor e^{tη}; since we don't want any big factor in front, we always choose η = 1/t. Here I still have the potential; you can think of this factor as the random variable times a plane wave, and in the lattice case we had this too. The exponentials e^{isH0} get replaced by these propagators, 1/(α - e(p) + iη), and these are actually the main objects we will discuss afterwards.

Okay, so then we put everything together. We take the Wigner function, which is the product of the two wave functions, and there is a collision history for ψ and one for ψ̄, so we have different n's in general and some sequence of lattice sites. Now we take the expectation, and since we assumed Gaussians this just gives a pairing; since the randomness is local and iid, the two sequences, viewed as sets, have to be identical if there is an up-down pairing, and of course they don't have to be identical if there is a pairing upstairs or downstairs, like in this graph. So we rewrote the whole thing in terms of graphs, and you remember the idea: these lines really are the factors of the random variables which are still there, and the pairing integrates them out. The important thing to remember, because it will be used all the time now, is that we have momentum conservation at every vertex. We have these momenta, which are integration variables, and you can think of every value of the integrand as a momentum flow in the diagram.
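For reference, this is the iterated Duhamel expansion in the form I believe is being used; a reconstruction from the recap, with the convention H = H0 + λV and initial state ψ0:

$$
e^{-itH}\psi_0=\sum_{n=0}^{N-1}\psi_n(t)+R_N(t),\qquad
\psi_n(t)=(-i\lambda)^n\!\!\int_{\substack{s_0+\cdots+s_n=t\\ s_j\ge 0}}\!\! e^{-is_0H_0}\,V\,e^{-is_1H_0}\,V\cdots V\,e^{-is_nH_0}\psi_0\,ds,
$$
$$
R_N(t)=-i\lambda\int_0^t e^{-i(t-s)H}\,V\,\psi_{N-1}(s)\,ds,
\qquad
\|R_N(t)\|\le \lambda\,t\,\sup_{0\le s\le t}\|V\,\psi_{N-1}(s)\|.
$$

The last inequality is exactly the "delete the unitary and pay a factor of t" bound described above.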
And now we will actually start calculating things. Do you expect the up-down pairing to be special in some sense? I expect what to be special? The up-down pairing. Yes; you will see it at leading order anyway. The logic is that these pairings are important because they contain the terms which show up, for instance, as the loss terms of the Boltzmann equation. The point of today is that we will renormalize this expansion, and then these contributions become subleading, because they are already included; I will explain this. Then it is mainly a question of up-down pairings, but of course there are also objects which are not of this form, you see this pairing here for instance, and one has to show separately that those are small. This is all to come; you will see how it goes.

Okay, so the idea now is to look at some graphs, especially at the ladder graph, because it gives the dominant contribution, and we will find reasons why the other graphs have smaller values. If you remember, the last thing I did last time was to show, by a Cauchy-Schwarz estimate, that the value of a general up-down graph is bounded by the ladder graph at ξ equal to zero. So this is everything we did up to last time, and now let me discuss things a little more; I'll go to the blackboard.

The essential object, I now call it C_α, is C_α(k) = 1/(α - e(k) + iη), and remember η = 1/t, so that is essentially λ^{2+κ}, a constant times that; maybe I'll put it here. Oh, I'm not quite awake; yes, κ is a positive number. That is the amount of time we go beyond the kinetic timescale: the kinetic timescale is λ^{-2}, and we consider times up to λ^{-2-κ} with κ positive, so in the scaling limit the time is actually infinite on the kinetic timescale. But so far, what you've done also works for κ equal to zero? Yes, exactly.

Right, so the point about this propagator representation is the following. If you remember the oscillatory representation, there you couldn't really use absolute values; you have to do integration by parts and stationary phase and everything, because if you just do bounds by absolute values you lose all of that. In this representation, by contrast, you have a generic feature. This is a function which, so far, has α and e real; α is just this integration variable. The sup norm of C_α, meaning the sup norm in k, is of course 1/η, so that is t; the sup norm is very large. But the singularity is localized: in the continuum case it sits on a sphere of radius √α, and in the lattice case it is of course not just a sphere. This may seem a simple detail; let me draw it in two dimensions. The two-dimensional version of the lattice dispersion is just e(k) = (1 - cos k1) + (1 - cos k2), and the level sets of that look a little different: momentum space is periodic, and the level sets are in general not convex; in two dimensions they can even be straight lines. If you go to three dimensions you don't have planes, but you do have surfaces with vanishing Gauss curvature and all kinds of things, and the decay properties of such a function in position space depend very strongly on this.
So, just to say it again, the lattice case is quite a bit different on the technical level when you start doing estimates; this will show up later. Okay, so that was the sup norm. The one norm, the integral of |C_α(k)| over k, is proportional to log(1/η), so log t; the one norm is a lot smaller than the sup norm. If you calculate the two norm of C_α, that is more or less explicit; let me briefly bring in the density of states. This is a simple transformation, and you remember the density of states: it is what some people call the co-area formula, giving the density of states φ(e) for this dispersion function. In the continuum case φ(e) is proportional to e^{d/2-1}, so in particular in three dimensions we have a square root of e, and this square root will turn up many times. On the lattice, if you really want to calculate φ it is a Jacobi elliptic function, but that's irrelevant; near the bottom of the band it again looks like e^{d/2-1}, but up here you get logarithmic singularities. So in two dimensions φ is not a bounded function; but we are in three dimensions and higher, and then φ is at least bounded, although its derivatives are divergent at certain points. And recall from the formulas we had that we are integrating over α, so we cannot fix a single level set; we want bounds uniform in α, so we have to be a little careful at this point.

Okay, so I was at the two norm, and the two norm is proportional to t if you just do the calculation, integrating by parts and estimating. Or, yeah, is it like t or square root of t? Oh, let's just do an upper bound first: you take the sup norm of φ times an explicit integral, which is π/η, so πt. Oh yes, right, sorry, that is the square of the two norm; the two norm itself goes like square root of t, yes. So the integration is only about the local singularity? Well, that's what matters most, and that is also the intuition one should keep in mind: we are integrating over k, and the essential region is a vicinity of the singularity, a shell of thickness essentially of order 1/t, so the decay is not really relevant. Well, it is relevant in that in the continuum you have to integrate over all of space, and then the k^{-2} decay alone is not enough to make things finite; but in the continuum case we have these profiles which describe the local random potential, they are assumed to be localized, so they provide the decay. One has to be a little careful, but for all the estimates that concern this long-time limit, it is basically a small shell around the level set which matters. Maybe as an aside: if you integrate C_α over α, then locally there is again a logarithm of η, and the integral is also divergent as α goes to infinity, but one always has enough factors to make it convergent, so we are not going to worry about large α.
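A quick numerical sanity check of these three norms; again my own illustration, using the lattice dispersion from above. The grid size and the η values are arbitrary, and η has to stay well above the grid spacing for the numbers to mean anything.

```python
import numpy as np

# Check ||C||_inf ~ 1/eta, ||C||_1 ~ log(1/eta), ||C||_2^2 ~ 1/eta
# for C_alpha(k) = 1/(alpha - e(k) + i*eta) on the 3d torus, with the
# lattice dispersion e(k) = 3 - cos k1 - cos k2 - cos k3.
n = 96                                       # grid points per direction
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
K1, K2, K3 = np.meshgrid(k, k, k, indexing="ij")
e = 3.0 - np.cos(K1) - np.cos(K2) - np.cos(K3)
dV = (2 * np.pi / n) ** 3
alpha = 1.0                                  # a level inside the band
for eta in [0.3, 0.1, 0.03]:                 # keep eta >> grid spacing
    C = 1.0 / (alpha - e + 1j * eta)
    print(f"eta={eta}: sup={np.abs(C).max():.1f}, "
          f"L1={np.sum(np.abs(C)) * dV:.2f}, "
          f"L2sq={np.sum(np.abs(C) ** 2) * dV:.1f}")
```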
Basically, from this one can already derive the power counting for the graphs we had. Remember we had an integration over the k variables and over the k-tilde variables, and the delta functions from momentum conservation fixed half of the momenta in terms of the other half. One estimate you can always do in an integral with all these C's: whenever a C has a very complicated momentum as its argument, you just estimate it by the sup norm, and then the rest of the integrand factorizes into independent integrals. So you immediately get a bound with the sup norm on half of the factors and the two norm on the other half, and that gives you (λ²t)^n up to logarithms. So that is an indication that it may be good to proceed this way and get finer estimates as well; this would be called ordinary power counting in field theory.

Now, since we have been talking about this, let me make a side remark here on the side. I told you in the grand introduction that there are all these other models which are also treated by Feynman graphs. If I look at the fermionic one, now in equilibrium, I had written down an operator of the type ∂_τ - e(-i∇) + μ; this is the grand canonical ensemble for fermions with chemical potential μ. If you have an inverse temperature β and go to Fourier space, this becomes the propagator ĉ(ω, k) = 1/(iω - e(k) + μ). You see it has some similarities with our C_α, but also some differences, and to give some perspective I would like to compare the two. First of all, ω is the variable dual to τ, and the Euclidean time runs from 0 to β, so ω is actually discrete: ω lies in (π/β)(2Z+1), odd multiples of π/β. Then basically 1/β plays a role similar to η: the temperature plays the same role as the finite time. Of course the properties are slightly different. You now have two variables, ω and k, and in contrast to our α, the ω is associated to each line individually, so you have many more integrations. You can calculate, for instance, the sum of ĉ over ω in this set, call it M_F, and that actually gives you the function that Mathieu wrote down yesterday, 1/(e^{β(e(k)-μ)} + 1). If you then integrate the whole thing over k, you get what he had; I think he called it ρ0(μ). So that is the relation to the free Fermi gas one had before, and the analysis of many-fermion systems at weak coupling actually goes via Feynman graph and cluster expansions with this propagator. If you look at the one norm of this ĉ, it is actually infinite, because of the slow decay at infinity; that can be cured, and you don't even need to renormalize, you just have to rearrange. But let me just say this: the two norm squared of ĉ is finite, and it is proportional to log β, and this is the logarithm of the BCS transition. So this logarithmic behavior comes out of the two norm, and the two norm squared is the value of a graph that looks like this.
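The frequency sum behind this remark is the standard Matsubara identity, which I am adding here for reference (a known fact, not something computed in the lecture): summing the fermionic propagator over the odd frequencies reproduces the Fermi function,

$$
\frac{1}{\beta}\sum_{\omega\in\frac{\pi}{\beta}(2\mathbb Z+1)}\frac{e^{i\omega 0^{+}}}{i\omega-(e(k)-\mu)}
\;=\;\frac{1}{e^{\beta(e(k)-\mu)}+1},
$$

and integrating this over k gives the free density ρ0(μ) mentioned above. Note how 1/β regularizes the sum in the same way that η regularizes C_α.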
There are many similarities between these equilibrium models and the thing we study here, but there are also some very strong differences. In equilibrium the μ is fixed: you always look at the singularity on a single level set, usually called the Fermi surface, because it is a surface in three dimensions, codimension one in general. And of course it is a many-particle problem, so you don't just have two lines but more complicated graphs. Still, there is a certain relation, and there is also an analysis which goes around this singularity set; the simple L1-L∞ bound which I mentioned, without even writing it down, is used a lot in that setting too, but in a more refined way, with multi-scale analysis, and then it becomes very useful, you get very fine estimates with it. Okay, so much for the aside.

Now let's get back to the details; let me show you a calculation for the ladder graph, so we look at this graph here. Remember, we feed in k0 and k̃0 here, and let's just calculate at the value where these are equal, k0 = k̃0; maybe I'll write it explicitly. Now, we know that momentum conservation holds at every vertex, and this implies that this k1 is the same as the one down here, because whatever comes down here gets the same k0 back, and we get k1 here, k2, k2, and so on; kn should be up here. So the very simple structure of this graph is that it propagates the k's through, and it will not surprise you that if you consider a single building block, a single rung of this ladder, you can rewrite everything in terms of that. What comes out, without writing too much on the board: one has e^{2tη} from the prefactor, λ^{2n} since every interaction line gives a factor λ², then the integral dα dβ of e^{it(β-α)}, and then the product of propagators; let me arrange it so that the single-rung integral appears here to the power n-1, and then there is a last piece where the wave functions are attached, C_α(k0) times C_β(k0)-bar, and we still have an integral over that. So if you can evaluate this simple rung integral, you know what the whole function is.

Let's see what this gives. It is some calculation, and I'm not sure I want to do it in great detail because it is elementary. The most important thing is that you get something non-zero: the rung integral comes out as something like (real part minus iπ(φ(α) + φ(β))) divided by (β - α - 2iη). This is the single integral you get; I'm just telling you, you plug in the density of states representation, so you have a one-dimensional integral, and you verify that because of the way things arrange themselves in the imaginary part there is a plus sign, no cancellation, you really get something non-zero. Now we take this to the power n-1, and without writing out the whole thing, what we get is an integral dα dβ of e^{it(β-α)} times (β - α - 2iη)^{-(n-1)}, with (φ(α) + φ(β))^{n-1} here, and a λ^{2(n-1)}, so we save one λ² for the last rung; the last integral is also saved, to make the α integral convergent afterwards. This you do by residues, and it turns out to be (λ²t)^{n-1}/(n-1)!, so the bound we had before really saturates here. I'm sorry, you said the last integral is saved, but there are still the α and β integrals? Oh, but with the β integral you integrate in the variable β - α, and there you have plenty of decay. Right, so it really is (λ²t)^n/n! if you do the last integral as well; it really is that big.
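The residue step that was only quoted can be sketched as follows; with γ = β - α and the order-n pole at γ = 2iη, closing the contour in the upper half plane for t > 0 (constants and the φ factors glossed over):

$$
\frac{1}{2\pi i}\int_{\mathbb R}\frac{e^{it\gamma}}{(\gamma-2i\eta)^{\,n}}\,d\gamma
\;=\;\frac{(it)^{\,n-1}}{(n-1)!}\,e^{-2\eta t},
$$

and since η = 1/t the exponential is just the constant e^{-2}; this is where the t^{n-1}/(n-1)! comes from.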
Okay, so that is the dominant contribution on the kinetic scale, but you also see that with this it is going to be impossible to make t any larger than 1/λ², because obviously these terms blow up. Now, before we discuss this further, let's see what happens when we don't take the ladder graph but do basically the same with the first two lines crossed. If you cross the first two, then all the stuff back here remains the same, it is just n-2 rungs instead of n-1, but the crossed piece has a different momentum structure: if I have k0 here and here, then going through this block I will have k2 here and here as well, that doesn't change, but if I have k1 here, then I will have something like, which way did we orient it, k2 plus k0 minus k1, something like this. Which means that now we are looking at the integral of |C_α(k1)| |C_β(q - k1)| over k1, where q still depends on the other integration variables; it is q = k2 + k0 if I'm not mistaken. Let me state this as a little lemma: if you look at such an integral, the integral of |C_α(k1)| |C_β(q - k1)| dk1, then it is bounded by a constant times η^{-b}/(⦀q⦀ + η). It turns out that b = 0 for e(p) = p², and b is somewhere between one half and three quarters in the lattice case; the norm ⦀q⦀ is |q| in the continuum, while on the lattice it is the minimum of the distances of q to certain special vectors π̂_j, these vectors you see here, which come from the periodicity, so one has to worry about them; again a specific lattice feature. But let's simply think continuum: then the integral is bounded by a constant times 1/(|q| + η).
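To see the crossing gain numerically, here is a small check of my own, with the same lattice dispersion and grid conventions as before; q is realized as a grid shift, and all the constants are illustrative.

```python
import numpy as np

# Compare I(q) = int |C_alpha(k)| |C_beta(q-k)| dk at q = 0 (full
# overlap of the singular shells, size ~ 1/eta) and at |q| of order
# one, where the shells only meet transversally: bound ~ 1/(|q|+eta).
n, eta, alpha, beta = 96, 0.05, 1.0, 1.0
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
K1, K2, K3 = np.meshgrid(k, k, k, indexing="ij")
e = 3.0 - np.cos(K1) - np.cos(K2) - np.cos(K3)
dV = (2 * np.pi / n) ** 3
A = np.abs(1.0 / (alpha - e + 1j * eta))
B = np.abs(1.0 / (beta - e + 1j * eta))
print("I(0) =", np.sum(A * B) * dV)           # ~ 1/eta, large
shift = n // 6                                 # |q| = 2*pi/6, order one
# e(-k) = e(k), so B(q - k) is a cyclic shift of B on the torus grid
print("I(q) =", np.sum(A * np.roll(B, shift, axis=0)) * dV)
```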
So the p in your formula, is it supposed to be the q? Yes, it's supposed to be the q, thank you. So what this tells you: compare it to the ladder diagram. For the ladder, I told you, the corresponding integral was somewhere up there, the two norm of C_α squared; what did I do here, this is terrible, okay, so that is π/η, proportional to t, so very large. Here the t is absent: you pay with a singularity at q = 0, but most of the time you gain a huge factor, a factor 1/t, which is essentially λ² at the moment and λ^{2+κ} later on. This is what one would call improved power counting, or we call it the crossing estimate: it is a gain from the indirectness of the collisions. If the collision history has a crossing, you gain a factor 1/t, and the name of the game is to collect as many of these factors as possible; that is basically the strategy of the proof. Okay, so I will not actually calculate this now, but obviously the crossed graph is down by a factor. The question is then what happens when I insert this 1/|k2 + k0| into the next integration up there; I have to revisit one of those integrals. It actually doesn't create a problem, because we are in three or more dimensions, we can handle it; but you see this is a dimension-dependent statement. On the other hand, when we go through large graphs we have to be very careful that these singularities don't pile up, which could in principle happen: piling up would mean many factors with the same q.

Okay, let's look at the time. Is there any question up to now? Do you then end up with a classification of these graphs where you essentially just count the number of crossings? Essentially, yes; there are many, many technical details which I will only mention in passing, but this is what I will discuss in the second half of today. Maybe before the break, let me just say for a second what happens in this situation here. This low-order graph one can simply calculate, and maybe we do the calculation after the break; it will turn out to be again just a non-zero contribution. Now let's see what happens when we string them together on one side. That means we get, whatever the value of this is, call it g for the moment, depending on α and on k; actually I think we called it θ, so let's call it θ. If you look at such a graph, we will have some integral over α, and we will have C_α(k); momentum conservation obviously tells us it is always the same k that goes through, from beginning to end, on every intermediate line as well. So we get some power of C_α(k): if we have n of these insertions, we get C_α(k) times (θ(α, k) C_α(k))^n, and then another C_α(k). You see, if you sum just this subset of graphs you also get a geometric series, and if θ does not vanish when we are on this almost-singular set, then this is going to be large, because it is a very high power of a singularity; basically it will give a factor similar to that one.
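Formally, the geometric series being alluded to is (schematically, suppressing questions of convergence):

$$
\sum_{n\ge 0} C_\alpha(k)\,\bigl(\lambda^{2}\theta(\alpha,k)\,C_\alpha(k)\bigr)^{n}
\;=\;\frac{1}{\alpha-e(k)-\lambda^{2}\theta(\alpha,k)+i\eta},
$$

which is precisely the shifted propagator that the renormalization after the break builds in from the start, without relying on this questionable resummation.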
Do you have a question? Okay, so this is the essence of what we will do after the break: we will re-sum these things, except we will not do it with any questionable geometric sum; we will just rearrange our expansion point. It will end up the following way: these insertions will still appear in the new expansion, but, and this is my break signal, they will appear in a renormalized, subtracted form. Every line I've shown you, which was one of these propagators, is now going to be something else, a renormalized propagator, and because of the properties of the new propagator the estimates change: the ladders no longer blow up on that scale, they remain order one on the larger time scale. So the idea is to rearrange the expansion so that these insertions become harmless, the ladders become order one on the longer time scale, and then we start tracking the crossings. Let's have a five-minute break and then continue.

Okay, now let's talk about renormalization. One is fortunate in this problem: despite all the other complications, in this scaling limit one only has to do the simplest, so-called lowest order renormalization; in the field-theoretic many-body case you have to renormalize many more terms. So we had these two-point diagrams, and let me call the value of this diagram θ_η(α, p). In our lattice setting this is θ_η(α, p) = the integral dp' of |B̂(p - p')|²/(α - e(p') + iη); I shouldn't write it without the weight, let me put the |B̂|² here, which I actually use for doing bounds. You would have a certain shape for your potential profiles, and B̂ is the Fourier transform of that shape function. This function is simple enough that you can say basically everything about its properties, and the most important thing is that its imaginary part is negative. So let me list a few properties, again as a little lemma: |θ_ε(α, p) - θ_{ε'}(α', p)| is bounded by a constant times (|ε - ε'| + |α - α'|)^{1/2}, for ε bigger than ε'. I mean, basically, if one writes things out naively, all diagrams are large; can I continue for fifteen minutes, and maybe then you have the big picture? Sorry, I'll restart: technically, I just wanted to list the properties we want for this function. This bound means it is Hölder continuous in these variables, but in general, and especially in three dimensions, one has to be a little careful when differentiating it; in particular, when we do a Taylor expansion we won't get a full power, as you will see, but only a fractional power. Since I want to tell you the truth, I wanted to mention that. Then one can just calculate: the imaginary part of θ at (e(p), p) is negative, and its absolute value is bounded by a constant times the minimum of |p|^{d-2} and |p|^{-1}, so at large p it goes to zero as well. Okay, this is really elementary stuff.
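As a sanity check on the sign of the imaginary part, here is a small numerical evaluation of my own, with the shape function B̂ set to 1, a simplification I am making purely for the illustration:

```python
import numpy as np

# theta_eta(alpha) = int dp / (alpha - e(p) + i*eta) on the 3d torus
# (normalized measure), with |B_hat|^2 replaced by 1 for simplicity.
# The point being checked: Im theta < 0 for alpha inside the band.
n, eta = 96, 0.05
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
K1, K2, K3 = np.meshgrid(k, k, k, indexing="ij")
e = 3.0 - np.cos(K1) - np.cos(K2) - np.cos(K3)
w = 1.0 / n ** 3                             # normalized torus measure
for alpha in [0.5, 1.0, 3.0, 5.0]:
    theta = np.sum(1.0 / (alpha - e + 1j * eta)) * w
    print(f"alpha={alpha}: Re={theta.real:+.4f}, Im={theta.imag:+.4f}")
```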
So let's think of θ_η(α, p) and integrate again over the energies with the density of states: we have 1/(α - E + iη) and then the average of |B̂|² over the level set E. Okay, so then, you remember, if you take the limit η to zero you get the standard distributional identity: a principal value for the real part and a delta function for the imaginary part. So in particular, for the imaginary part, taking η to zero localizes E at α: if you take Im θ_η(α, p) and let η go to zero, you get something like π times φ(α) times this average. And φ(α), as you know, is proportional to α^{d/2-1}, which is where the factor |p|^{d-2} from before comes from. Why does it now depend on β? It wasn't supposed to depend on β. I'm sorry, that should be p, as before; I don't know what I'm writing, sorry. And the right-hand side doesn't depend on p? Well, it depends on p via this average. So it is a function with negative imaginary part, and of course it comes with a λ², so the important thing to remember is λ² times θ. And to say it right away: compared to the η we still have up there on the first board, this is going to be bigger; of course not everywhere, because at p = 0 you again get something that vanishes, but for most values of p, except a vicinity of zero, this thing is much bigger than that.

Now we rearrange the expansion to make use of this. We do the rearrangement in the following way: we define ω(p) = e(p) + λ²θ(e(p), p), so we just add a function of p to e(p), and we define a new Hamiltonian H̃0: in Fourier space, (H̃0 ψ)^(p) is just supposed to be multiplication by ω(p). We take this as the expansion point: we write H, which is still H0 + V, as H̃0 + Ṽ, and what we added to H0 to get H̃0 we subtract from the potential, so Ṽ = V - λ²θ. Now we repeat the whole expansion. We did use some properties of H0 before, H0 was supposed to be self-adjoint of course, so that we don't get any problems; but you see that the imaginary part of this added contribution has the correct sign: it makes e^{-isH̃0} a bounded operator, because the imaginary part we add is negative. It is most easily seen in the α representation: we now put α - ω(p) + iη, so we have replaced the e by the ω, and this is basically α - e(p) - λ²Re θ plus i(η - λ²Im θ); since Im θ is a negative quantity this is actually positive, we make the imaginary part significantly bigger for most p. And that is the reason why, on the larger scale, when we now do all the diagrams with this bigger imaginary part downstairs, things will still work. You probably said this and I missed it: when you renormalize, you do this for positive η, so there is a θ_η? This here is supposed to be θ(α, p) in the limit, so you take the limit and you renormalize with the limit, not with a finite η; but you see, that is why I wrote down this technical-looking lemma: you have enough regularity to bound the difference. Okay, so now we repeat the expansion, and I still call the propagator C_α(p), sorry for the abuse of notation, but we repeat the expansion with the new C.
Of course the structure of the expansion doesn't change; the question is what we have gained. And it is just the fact that if you calculate the ladder diagrams with this new propagator, repeating the calculation I did before, you see that they are order one on this long time scale. This is not a short calculation, but it is true; I don't think I will have time to do it, so it turns out that the ladder diagrams are order one on that scale, and let me at least write down what one has. Can you say on which scale, the λ^{-2} or the λ^{-2-κ} scale? Let me write down what one has: if you take the integral λ² ∫ dp of 1/(α - ω(p+r) + iη) times 1/(β - ω̄(p-r) - iη), where the second factor has the complex conjugate and there are no absolute values, and you take the sup over α, β and r, then that quantity, minus one, is less than a constant times λ^{1-κ}. This would be a single rung of the ladder, and so this integral now becomes one.

Now I should say how this renormalization really works when we expand. Unfortunately I have to go back to the original expansion, and I don't really want to write it on the board, but just remember we have this diagram with all these sites a1, a2, a3 up to an. If we expand again, we will again produce such terms after pairing, but now we are expanding in Ṽ: this is the sum over a of these v_a times the profile B(x - a), and then we still have this one extra term, the λ²θ. So in every step of the expansion there is one more index: we sum over a_{n+1} in Z^d, together with one extra term, the λ²θ insertion, in every order. So think of a collision history of this type and going from n to n+1: there are two possibilities for a_{n+1}, which is a label in Z^d. If a_{n+1} is equal to a_n, then you know you must get a pairing here in one of the terms after taking the expectation value; but on the other hand we also had the λ²θ term at the n-th order of the expansion, and we keep these two together whenever a_{n+1} = a_n. So we can automatically group them in the iteration of the Duhamel expansion, and we get something where this insertion now comes with a minus sign and with α localized at e(p), because that is what this rule says: we localize the θ at α = e(p). That means we now have something of the type C_α(p), then the value of the insertion, θ_η(α, p) - θ(e(p), p), and then the next C_α(p). So by this rearrangement we get a subtraction directly in this term, and then of course you do a Taylor expansion, and the Taylor expansion gains us, well, not a full power, because we only have the estimate up there, but half a power: effectively one gets a t^{-1/2} from this cancellation. So not only are these terms no longer large, they actually become small, suppressed in the limit. And that is essentially the bigger picture; is this enough for you? I'm just thinking off the back of my head. So it is a kind of cancellation coming from the new propagator, and the new propagator is a one-loop correction? Yes, one would call it a Hartree-Fock type correction, which is basically such diagrams plus tadpoles, and tadpoles don't appear in this theory; but yes, it is the simplest type of self-energy renormalization you can have, and it is sufficient for that time scale.
So far I have only claimed this statement about the ladders and shown you how one renormalizes; now one has to go through the entire expansion again, and one actually has to prove these crossing estimates, because we still have of order n! diagrams at every order n. So the procedure is to define a number of rules for this expansion. Let me do the graph estimates first and then talk about the rules for the expansion, because the rules for the expansion are not completely simple. Okay, so I have drawn here one of these graphs, with momenta k0, k1, k2 and so on, and you see we have some kind of pairing here; it is a pure up-down pairing, for the reasons I discussed before. Now we want to know its value. First of all we associate a permutation to this graph; obviously, let's keep the convention that we take the index on the left, so if this is one, then one goes to three, and three goes back to one, and so on; this is just the graph of the permutation, drawn upside down. Then there is the Kirchhoff rule for the momenta. Let me see if I can show this here: for instance, we have this momentum number one here, and if we want to know which of the β propagators depend on it, remember here we have the α's and here the β's, we just draw the loop around here and read it off. Similarly, if we look at number five, it flows this way and back, and we know that the whole row between three and six has to depend on k5. These are just the momentum conservation rules, very nicely codified in this matrix here: whenever you have such a column, you have these entries, so you can identify things at this point; and for number two, if you go from here, it runs against the orientation, so you get some minus ones as well. That is basically the first organizing principle: you don't look at graphs anymore, because obviously that gets too complicated, you just look at this. Oh, I see, this is barely visible; well, anyway, I hope it's clear: you organize everything in terms of permutations and the associated matrices, which tell you which momenta depend on what.

That now allows us to easily write down the graph amplitude, so here is the amplitude of this graph. We have all this stuff in the first line, which you don't really have to look at; it comes from this representation, the β and α integrals, and the pieces that sit with the ψ's and, on the other side, with the observable. Here is the core of the graph, between 1 and 7, with these propagators C_α and C_β. Here is still the δ_π, the product of delta functions; now we have worked out those delta functions and we get all these propagators: the k_j are the integration variables, and the β propagators get their momenta fixed exactly according to this rule, according to this matrix. You can read the matrix: if you want to know what k̃3 is, you just sum up the momenta with these signs, and you get this lengthy-looking expression, this one here, I think.
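Schematically, and only as my paraphrase of the transparency (the normalization, the |B̂|² factors, and the pieces attached to the wave functions and the observable are all suppressed), the amplitude has the form

$$
\mathrm{Val}(\sigma)\;=\;\lambda^{2n}\,e^{2t\eta}\!\int d\alpha\,d\beta\;e^{it(\beta-\alpha)}
\int\prod_{j}dk_j\;\prod_{j}C_\alpha(k_j)\;\prod_{j}\overline{C_\beta(\tilde k_j)},
\qquad
\tilde k_j=\sum_i M(\sigma)_{ji}\,k_i,
$$

with M(σ) the matrix of entries 0, ±1 read off from the Kirchhoff rule just described.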
And now one important thing I have to mention. I showed you this crossing estimate; it must be here somewhere, I think I erased it, but you know the integral: where we had a shift by a momentum q, the integral was much smaller than at total overlap, we had this 1/|q|. Now you could think of doing this estimate with more than one extra factor of C, say with five propagators, and you see that in general, in this expression, lots of these C's depend on any particular k_j. But if you start doing estimates with more and more of these propagators, you lose track of the singularity that comes attached to the improvement: it sits on some hypersurface which, at least, we don't know how to characterize. So we want to stick to having only one C_β in each of the sub-integrals where we use the improvement, and obviously the expression is not in this form yet. That is where the whole algorithm becomes interesting, because you have to decide what to do to separate these k's. Well, you remember that you can still estimate some of the propagators by their sup norm; you pay a factor of t, but okay, you also get small factors. This is what is done; I don't have time to go into the details, but basically the algorithm is: you estimate by the sup norm those propagators which lie above any of these peaks, and then one proves that it actually works when you work from the bottom up, from these valleys, so that you can use every other integration momentum to get a small factor, except if you have ladders, because there you already know that you gain nothing. Now this graph is such that it has no ladder substructure, but you could think of placing a ladder in between any of these lines, or parallel to any of these lines, and you would have to dig them out. Fortunately, in the permutation representation the ladders are very simply described: they are basically those indices where the permutation just advances or goes back by a single step. And what one then defines, sorry. How do you say? You can artificially put extra lines to make a ladder, but how many do you put?
No, no, no, maybe I said it wrongly; this is just an example graph. I drew up this example yesterday, and then I realized it didn't have a ladder index inside; you could just as well draw one where there is a ladder inside. If you don't have a ladder, you leave the graph as it is. What do you mean, you renormalize it as it is? No, the renormalization is done by changing the propagator; these here are just estimates of the diagrams, there is nothing more: you have a diagram and you estimate it. I mean, you could also have, for instance, something like this, whatever, and then you look at the substructure of these three parallel lines, and they actually form a ladder; you see it in the graph of the permutation as a piece which just goes down step by step, or up step by step. Whenever you have this substructure, I've already shown you that the momentum structure is such that you can't expect any gain from there; in other words, you don't have to take them out, but you don't get a gain from them. So for the ladders, when estimating remainders, you just use the 1/(η + λ²|Im θ|) estimate.

And this is what is written here, very briefly, on this transparency: one defines the degree of a permutation σ as the number of non-ladder indices, and the statement is that the value of any Feynman graph with this permutation is bounded by λ to some positive constant γ times the degree, that is, by λ^{γ d(σ)}; that is the gain one can get by this iterative integration. So you fix the permutation: for degree d, there are n - d(σ) ladder indices, and those form consecutive runs, so one can easily estimate that the number of permutations of n elements with degree d is of order (2n)^d instead of (2n)^n; they are not really factorially many, because the ladder pieces progress linearly. Then, a little bit like in a cluster expansion, with an entropy estimate against an activity estimate if you really want to think of it this way, you verify that if you sum over graphs of order n with degree at least d you get a convergent sum, and you find that the sum of all these Feynman graphs is small as long as the gain exponent γ is a little bit bigger than the κ. That is of course where the restriction on κ comes from: to prove this you have to know what γ is, and γ is rather small, for various reasons. Some of the reasons are that, as I told you, the crossing estimate on the lattice is a lot worse than in the continuum, you still had this negative power of η there, and then the γ just becomes smaller and smaller.
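In the permutation picture this classification is easy to state in code; a sketch of my own, with the boundary convention at the last index glossed over (an assumption on my part):

```python
def ladder_indices(pi):
    """pi: a permutation as a list [pi(1), ..., pi(n)].  Index j is a
    ladder index when the permutation advances or retreats by a single
    step: pi(j+1) = pi(j) + 1 or pi(j+1) = pi(j) - 1."""
    return [j for j in range(len(pi) - 1) if abs(pi[j + 1] - pi[j]) == 1]

def degree(pi):
    # non-ladder indices; ladders give no crossing gain, every other
    # index is worth a small factor lambda^gamma in the graph bound
    return len(pi) - len(ladder_indices(pi))

# example: [1, 2, 3, 5, 4] has ladder indices [0, 1, 3], degree 2
print(ladder_indices([1, 2, 3, 5, 4]), degree([1, 2, 3, 5, 4]))
```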
Okay, let's see; there are many other technical points which I haven't discussed in any detail. First of all, I've told you that one can integrate out and collect these crossing gains, but every one of these small factors comes attached with a point singularity, so you have to keep track of those; that influences the algorithm and makes the γ smaller. Then one has to have stopping rules for the expansion, and part of them are relatively simple: you classify the collision histories, and in every step when you expand you ask whether there is a repetition of an index. If it is an immediate repetition, you do this renormalization; if it is a non-immediate repetition, for instance if a1 is equal to a3, then we know it has to get paired in a later contribution, but that gets a small factor; that is a separate proof, with an estimate similar to the crossing estimate. And one does the expansion by deciding beforehand to split into sequences with repetitions and without repetitions; that is not a problem, because one separates the expansion of collision histories from actually taking the expectation value. But then, you remember, the day before yesterday I gave you this tedious calculation showing that there is momentum conservation when you average over the randomness, and that required summing over all a's, not just non-repeating a's, so you get some additional error terms from that. This is a typical exclusion condition, which is dealt with by a polymer-type expansion; it makes things a bit more complicated, but it is not so interesting. Let's say in general: we can deal with non-Gaussian randomness, which I will not discuss here; the higher moments are actually easier to bound. And then there is one really nasty technical thing: the dispersion relation on the lattice has non-convex level sets. I think I have a drawing of a level set, yes, here; that is a typical level set in three dimensions (this part down here is not part of it, it is just the lowest plane drawn in the picture). And that makes the crossing estimate quite a bit harder. These non-convex Fermi surfaces, or level sets, also have a big influence in the equilibrium problem, but at least there it is always the same level set, whereas here we have this α parameter which has to be integrated, so it runs through all the level sets and every bad case is included; one has to work a little bit more to get this controlled.

Okay, I won't show any more; this is the end of the review of this work on quantum diffusion. According to my original time plan I still wanted to do another topic, which would probably be another two hours, so let me just say a few words to end; switch on the light again. There is the question of still longer time scales. I haven't given it too much thought; there are diagrammatic approaches based on physicists' work, for instance a paper by the Japanese physicist Hikami who did a resummation of diagrams relevant for longer time scales. He didn't formulate it in terms of time scales but for the localization problem; actually, in the physics community most people switch to a non-linear sigma model description of this localization problem, which is due to Franz Wegner and was much developed over time. I have nothing to say about that sigma model; it is a very interesting and very difficult model, a sigma model with a non-compact target space. The other question is about more than one particle, perhaps with no randomness but with interactions between the particles: the quantum dynamics of many-body systems. It was already mentioned that, for instance, deriving the Boltzmann equation with a cubic non-linearity is still an open problem, and in fact some of the techniques we have here apply: if you think back to the introduction I gave you, you can cast it into a form where you have an expansion in loops and in vertices, with a certain average, and one can use some of the bounds from here. But we don't have a result where we can say we reach the kinetic time scale; certainly, though, this can be used on a technical level
to also address that problem. Okay, then I think I have said enough; thank you. Are there any questions? Can you explain once more how you count the degree of the permutation? Yes: here you have three ladder indices, and the total permutation has seven indices, so seven minus three is four. Of course in this picture it is extremely hard to see, but in the permutation picture it is very easy, because the ladders are just the pieces of slope one, increasing or decreasing, in the graph of the permutation, and you just take them out. But I find it strange that you would replace those three by one just because they are parallel; I find it strange to remove three and not two, because there is still a crossing between these three parallel lines and the other one. Maybe one can try to optimize it in this way, but actually you are right, the count here is two, because this index is not a ladder index: a ladder index means π(j+1) = π(j) + 1, or, if it goes the other way, π(j+1) = π(j) - 1, and if you look at this one, that is obviously not the case.

Sorry if this is too naive, but if the lattice model is so much harder to deal with than the continuum one, what is the motivation for doing the lattice on top of the continuum? Well, the motivation is maybe that the lattice problem is much simpler to formulate, and it is a well-known problem. And the model doesn't matter, right, for this quantum diffusion? Yes, I think it is just a question of which community you would like to address: when you talk to condensed matter physicists, they are all used to these lattice models and Anderson models, so it is a problem you can formulate simply and you want to have an answer; it is interesting to know. And if you look at other problems, as I mentioned here, if you really think of physics, then lots of the interesting things going on right now are in the regime where you have all these level sets which are non-convex; I mean, the entire high-Tc superconductivity theory is in that range, right?
It is not described by any continuum model as far as we know. But for this problem here, yes, we just wanted to include the Anderson model. Maybe this is also very naive: you just talked about those level sets and how to extract the decay, and their form is difficult; the dimensionality of the level sets I can see, and that some regularity is needed in order to work locally, but when you need more from the Fourier transform, beyond regularity and dimensionality, is there an easy picture of why this is necessary and how one does it? The easiest picture is probably to go to position space: if you look at the Fourier transform of such a level set, the question is how fast it decays. The shape matters too? Yes, of course; this is standard harmonic analysis, it is very important what the shape is: especially if your curvature vanishes, the decay gets a lot slower. You can actually rewrite some of these estimates directly in position space, for instance the recollision estimates, and then it is exactly a question of whether certain powers of the absolute value of the propagator are summable; it is an L^p question, and these things depend very strongly on what your curvatures are. Actually, it was surprising to see how little is known in general. So convex versus non-convex is mainly a question of the curvature in the end? Curvature plays a role, but remember we had this vector q: whenever there is a q for which part of the level set becomes congruent to the untranslated one, and in the square case you have lots of such q's, then things change very much, the decay changes and all these estimates change. These are very well-known cases: in the equilibrium setting this is the famous half-filled Hubbard model, with all its peculiar properties. So basically it is one of the crucial points in all this.

And to go to a longer time scale, what would the next class of diagrams to resum be; would it be the ladders, or? This is actually what is described in this Hikami paper: it is some kind of twisted ladders which you have to resum. You can take the ladder and twist the lines, so that naively you think it has a lot of crossings, but in fact it has only a single crossing in the permutation classification; so it is down by a factor 1/t, but not more, and if you want to go to longer scales you will have to look at these diagrams too. But these are not the only ones; we cannot tell you which ones you would need. This would not amount to a redefinition of the propagator; it is a redefinition of the coupling? Here the nice thing is that if you only have to do it with the propagator, you can do it all before you take averages, and that is the advantage. Any other questions or comments?

So, somehow, what you are able to do is to take into account part of the limiting equation, which is the loss term, and so you describe something which is supposed to be closer to the dynamics than brute perturbation theory, where you just expand; is it really possible to include more of the limiting equation, so that the error terms become essentially smaller? I think it will be possible, but are you asking whether it will be possible within this expansion?
Maybe, yes. This is what I mentioned as the non-linear sigma model: if you do diagrammatic expansions for these, you actually re-sum a lot more, but those things are mathematically not under control yet. But I agree, one would hope for a more elegant method, and the spirit is certainly that you identify what the important parts of the equation are and then modify your expansion accordingly; but there is no claim that this is the path to doing the problem on longer time scales. On the other hand, it is also not completely hopeless, and this was supposed to be an invitation, though I don't think I have invited many people. Any more questions or comments? Thank you for the very nice presentation.