Well, I was asked to talk about renormalization and quantum diffusion, probably in view of some work that László Erdős, Horng-Tzer Yau and I did actually quite some time ago, and I asked to change my title; you see it's again a little bit changed, so it's more about diagrammatic expansion and renormalization in quantum physics. I plan to discuss the following topics. I will give an overview of Feynman graph expansions. I wasn't sure which audience to expect; I was thinking of an audience of analysts who maybe don't know much about physics, but I see that I have an audience of experts, so it's probably all going to be way too elementary for you. But anyway, I'll give an overview of Feynman graph expansions and related expansions, then I will discuss the case of the quantum particle in a random environment, and finally maybe I have time to say a few words about many-body theory.

Okay, so let's start with Feynman graphs. (I think I made sure it won't ring.) Feynman graphs were introduced by Feynman in the context of quantum electrodynamics, and then they became ubiquitous in quantum field theory more generally. They have always had this aura of being mysterious and also ugly, so I will try to convince you that they are not mysterious; about the ugliness you can decide for yourself. They are really a kind of tool to calculate certain integrals. I can say myself, I didn't grow up with this stuff; I actually hated it when I was a student, but I've come to appreciate some aspects of it. There are some parts of analysis that I don't know how to do any way other than with Feynman diagrams. Of course, this is a function of time; one always tries to avoid too much of it, but sometimes it's useful. In the context of quantum field theory, formal perturbation expansions with Feynman diagrams were a big topic already in the 1960s, with names like Bogoliubov, Hepp and Zimmermann, and so on until today; since the 1960s a lot has been clarified and added, and new interests have been added too.
For instance, there is a whole group of mathematicians now doing Feynman graph amplitudes with a view towards algebraic geometry, and there are also, especially here, Riemann-Hilbert problems and Hopf algebras. So there are many, many interesting mathematical aspects, none of which I will touch, because they concern individual amplitudes of Feynman graphs, and we are more concerned with summing up the Feynman graphs. There is a big issue of convergence. The expansion is always going to be a power series expansion, so it's just an ordinary power series, and then you write the $n$-th order term as a sum of Feynman diagrams, and there are just typically very many of them. So typically, when you estimate diagram by diagram, you will get a series with zero radius of convergence. That's why, as I said, one has to rearrange the sums if you want to have some kind of convergence, or you do an expansion to a finite order and then of course have to see what the remainder term does.

So I mention here two basic things: tree expansions, which are a very efficient way of resumming Feynman graph expansions, and the loop vertex expansion, which I will also explain a little bit, and which was basically invented and propagated by Magnen and Rivasseau, who are here at École Polytechnique and at Orsay. And as I said, of course one often wants to avoid Feynman diagrams, and to some extent this is possible. There are very nice proofs of perturbative renormalizability that you can do totally without Feynman diagrams; it's called the flow equation method, by Polchinski, and for instance Christoph Kopper here at École Polytechnique is an expert on this. That method is by now, for perturbative purposes, better than Feynman graph techniques, so expect many things to be superseded eventually.

So let me begin. I'll try to do this with both blackboard and screen; I hope it works, otherwise I will just switch to blackboard. Let me start with something very simple. Suppose we have an $n$-dimensional space, and we have a Gaussian measure with covariance $D$, which is allowed to be complex as long as its Hermitian part is positive. If you want to have it with a density, then this measure is basically
$$d\mu_D(\phi) \;=\; \det(D)^{-1}\, e^{-\bar\phi\, D^{-1}\phi}\, \prod_\alpha \frac{d\bar\phi_\alpha \wedge d\phi_\alpha}{2\pi i},$$
something like this. So that's what this measure is supposed to be. Well, it's a Gaussian measure, so you can calculate all moments; the second moment is just the covariance, $\langle \bar\phi_\alpha \phi_\beta\rangle = D_{\beta\alpha}$, and the higher moments can be calculated as well. This is famous in physics as the Wick rule.
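(A quick aside, not part of the lecture: the Wick rule is easy to check numerically. Below is a minimal Monte Carlo sketch in Python; the covariance, sample size and index choices are arbitrary illustrations. It samples a complex Gaussian with covariance $D$ and compares the second moment with $D_{\beta\alpha}$ and a fourth moment with the two-term permanent that the Wick rule, stated next, predicts.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 3, 200_000

# A Hermitian, positive definite covariance D, factored as D = L L^dagger.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
D = A @ A.conj().T + n * np.eye(n)
L = np.linalg.cholesky(D)

# Standard complex Gaussians xi with E[xi_i conj(xi_j)] = delta_ij,
# so phi = L xi has E[phi_beta conj(phi_alpha)] = D_{beta alpha}.
xi = (rng.normal(size=(samples, n)) + 1j * rng.normal(size=(samples, n))) / np.sqrt(2)
phi = xi @ L.T  # each row is one sample of phi

# Second moment: should reproduce the covariance D.
second = np.einsum('sb,sa->ba', phi, phi.conj()) / samples
print("max |E[phi conj(phi)] - D| =", np.abs(second - D).max())

# Fourth moment against the Wick/permanent rule:
# E[conj(phi_a1) conj(phi_a2) phi_b1 phi_b2]
#   = D_{b1 a1} D_{b2 a2} + D_{b1 a2} D_{b2 a1}
a1, a2, b1, b2 = 0, 1, 2, 1
lhs = (phi[:, a1].conj() * phi[:, a2].conj() * phi[:, b1] * phi[:, b2]).mean()
rhs = D[b1, a1] * D[b2, a2] + D[b1, a2] * D[b2, a1]
print("Monte Carlo:", lhs, "  permanent formula:", rhs)
```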
The Wick rule probably has other names as well. It takes the form that if you take a product of the $\phi$'s, say $\bar\phi_{\alpha_1}\cdots\bar\phi_{\alpha_m}\,\phi_{\beta_1}\cdots\phi_{\beta_m}$, then you get a permanent: you sum over permutations $\pi$ of products $\prod_i D_{\beta_i \alpha_{\pi(i)}}$. So it's like the determinant, but you don't have a minus sign for the sign of the permutation; if you did the same thing for fermions, you would have Grassmann variables and the determinant. And of course you can convince yourself very easily, by looking at the case $n = 1$, that the permanent grows factorially; it's basically the definition of the gamma function, which comes out when you do that integral. We will be concerned with such averages quite a bit.

Obviously one can write down a generating function; this is just the Laplace transform of the measure, and it's also Gaussian. I didn't put any $i$'s in the exponent, so that it has a plus sign, but that doesn't matter: both sides are entire functions of $J$ and $\bar J$. Generating functions are extremely popular in field theory because they provide slick ways of doing calculations. For instance, if you look at a moment of order $2m$, then obviously you can write it as a derivative with respect to the $J$'s, by just taking the derivative out of the integral, which is allowed because of the absolute convergence, and then you plug in the formula for the generating function. (Actually, I'd like it like this, so I can hook on to my formulas.) So you fish the moment out of the generating function, you just calculate the derivative of an exponential, and then it's a simple Fourier identity that you can also write it as $e^{\Delta_D}$, an exponential of a Laplacian, acting on the original polynomial, or monomial in this case. The Laplacian is defined similarly to this quadratic form here, except there really is a $D$ in it: it's the covariance that appears, and you have a quadratic form in the derivatives with respect to $\phi$ and $\bar\phi$. It's an easy exercise to check that this is right.

Now suppose we would like to calculate something which looks like a partition function of what is usually called $\phi^4$ theory, so we have this integral $\int d\mu_D\, e^{-V}$, and let's take the $V$ to be quartic. Now let's just be careless and expand the thing in $\lambda$: we expand out $(-\lambda)^p/p!$, and of course we don't care, so we exchange the summation with the integration. We get a Gaussian average of an object like we had before, except that since we have a sum over all the $\alpha$'s, we take out $p$ sums over $\alpha$, and then we have exactly this expectation over the Gaussian measure of this monomial. So we can use our permanent formula; this just gives some permanent which now depends on all these $\alpha$'s.

Now let me show how one builds up these graphs from this. Basically, let me try to put up what I had here. What do these things mean? These are the vertices, and they actually get some kind of orientation here as well. Let me not try to reproduce everything; this is just supposed to mean that these legs are $\bar\phi$'s and the ingoing ones are $\phi$'s, and then this machinery tells you that you expand out the exponential of the Laplacian $\Delta_D$, and the Laplacian removes two of these variables: it will remove a $\bar\phi$ and a $\phi$, so it can put a line here.
So this act of differentiation is depicted as a graph, and then you can put in as many lines as you want, but you don't have to put them everywhere; in particular things can be disconnected. So this is the Feynman graph expansion for this example. It's very simple; in fact, it's just an algorithm for doing a Gaussian integral. Well, we have just noted that the permanent grows factorially, and this is a permanent of order $2p$, because we have a quartic term, so of course this expansion is divergent: it has zero radius of convergence, right, $(2p)!/p!$ gives convergence radius zero. This is in fact the typical problem you encounter when you do Feynman graph expansions, in just a toy example.

Okay, so why are we looking at this integral? What's the set of $\alpha$'s? I just took it to be $\{1,\dots,n\}$, but you should think of the degrees of freedom of your system. So you could take it literally, maybe, as a lattice in position space. Then you know that $n$ has to go to infinity if you want to take a continuum limit or thermodynamic limit, so certainly one is interested in the limit $n \to \infty$. And if you think of a lattice, of a discretization, say, of a Green's function of the Laplacian, then this will blow up when you go to coinciding points. Which means that if you think of this limit as a continuum limit, the diagonal of your covariance would typically go to infinity, and there is nothing that prevents you from drawing a diagram where that matters; then you have a problem taking the limit. This is usually called an ultraviolet problem, or short-distance problem.

The other question is whether, when you finally do your sums over these $\alpha$'s, which would now correspond to integrals over $x$, you have enough decay for these sums to be convergent, and that's typically also not the case. In fact, the most important physical examples are gapless. Not that only the gapless ones are important; there are many interesting systems which have decay. But if you think of a conductor, it is actually characterized by the fact that there is slow decay in the correlation function. So it's not a stupid way of posing the problem; this is where the interesting effects are. And what one does then is renormalization, and renormalization comes in as ultraviolet and infrared renormalization. The two are, let's say, operationally very similar but conceptually quite different: ultraviolet renormalization is a question of how you make a model well defined, while infrared renormalization is how to treat it efficiently, to avoid artifacts from bad choices; so it's more of a technical device. We'll see this in the graphical discussion.

And maybe as a little point of contact with Matthew's lecture: when you look at the trace of $e^{-\beta H}$, with an $H$ of free particles plus interaction, you can get similar structures, by using path integrals for instance. So the essential question is how you do it with controlled remainder estimates.
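(Another aside, not in the lecture: the zero radius of convergence in the $n = 1$ toy example is explicit. With covariance $D = 1$ the Wick rule gives $\langle |\phi|^{4p}\rangle = (2p)!$, so the $p$-th coefficient of the careless expansion has size $(2p)!/p!$, and successive ratios grow like $4p$:)

```python
from math import factorial

# p-th coefficient of the careless expansion of the n = 1 toy integral:
# |(-lambda)^p / p! * E[|phi|^{4p}]| = lambda^p * (2p)!/p!  ~  lambda^p * 4^p * p!
prev = 1
for p in range(1, 11):
    c = factorial(2 * p) // factorial(p)
    print(f"p = {p:2d}   coefficient = {c:>15d}   ratio to previous = {c / prev:8.1f}")
    prev = c
# The ratios grow like 4p, so the series diverges for every lambda != 0.
```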
Just to finish this very general introduction before I get more specific, I would like to state that very many quantum field theories or many-body models can be cast in a form where you have such an integral and then a product of, let's say, resolvent-type operators. So this is a correlation function for $2m$ points; you could actually think of one of those reduced density operators that Matthew introduced, there is an integral representation for those as well, and you get formulas of this type. You should think, and I will give examples on the next slide, of something $Q_0$ which you know explicitly, typically a differential operator and its Green's function, and then an interaction piece $i(\phi)$ which depends on $\phi$.

The typical way one derives Feynman diagrams (in fact, what we have here is more of the type of a loop vertex expansion) is that you expand these resolvents. Again, just formally: you write a geometric series first, without worrying about convergence. Suppose, for example, that the $i(\phi)$ is just proportional to $\phi$ times a delta, so it's a local interaction. Then each term describes some propagation with some propagator, then there is a $\phi$ and another propagator, and so on; you get long strings of propagators times these $\phi$'s. This is in fact exactly the structure we will be analyzing later, and I'm telling you this because the structure is actually generic for many models, also in many-body theory. Here is a little list; by the way, the $V$ can of course also be equal to zero, and then it's just a Gaussian average.

So I've drawn up a list of examples, most of which I will of course not discuss in detail, but most of them are similar. There's the Yukawa model, where the $Q_0$ is a Dirac operator, so the $\gamma^\mu$'s satisfy the Clifford algebra, $m_f$ is the fermion mass, and the $i(\phi)$ is just $\phi(x)$, a local interaction. Formally completely similar, although physically quite different, is the quantum electrodynamics case: the $\phi$ is no longer a scalar field but a vector field, the electromagnetic field, and it couples as $A$-slash, $\gamma^\mu A_\mu$. The action, the $V$, is actually more complicated there: you have a field strength term, and there is a trace-logarithm of the same object that appears here. That's the so-called fermion determinant, which you get from integrating over the fermionic degrees of freedom. So for us it's just: we take an average over the $A$ field of these objects. And actually the formal, and to some extent the combinatorial, structure is the same for quite a few different models.
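(Written out, the geometric-series step just described looks schematically as follows; this is my notation, the slide only sketches it. With $R_0$ the free resolvent,
$$\big(Q_0 + i(\phi)\big)^{-1} \;=\; \sum_{n \ge 0} (-1)^n\, R_0\,\big( i(\phi)\, R_0 \big)^n, \qquad R_0 \;=\; Q_0^{-1},$$
so each term is a string $R_0\, i(\phi)\, R_0\, i(\phi) \cdots R_0$: free propagation, a field factor, free propagation, and so on; it is the $\phi$'s in these strings that then get Wick-paired by the Gaussian average.)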
If you take a many-fermion system, which now is again non-relativistic, that's a Euclidean model, with imaginary time, and you have an operator $\partial_\tau$ minus some kinetic term plus a chemical potential; anyway, it's a differential operator. If you take a many-boson system, you would also have such an operator, and the difference between the two cases is in the sign in the determinant: fermions have the opposite sign. So this just gives a little bit of a view that many problems take this form, and then you can try to do expansions. For the quantum particle in a random environment it's actually slightly different, in that one also has a complex conjugate somewhere; it's that we have $yx$ instead of $xy$ in here. But it's also of the form that we have a resolvent evaluated, and the $H$ will actually contain a $\phi$, because of the randomness. And you can now expand all this in Feynman graphs. The Feynman graphs will look a little bit different, because the vertices have a different structure, they are more complicated here, but the general idea is the same. You should think of this as a particle line going through, and then the $\phi$'s get integrated and the particles start interacting.

Okay, so now, after this general introduction, I would like to come to the thing that was ordered for this talk, the random Schrödinger system. I would like to explain the Feynman graphs one encounters there, then we will discuss a little bit what their values are, so we'll go through a few calculations, and then this will be put to use to show that there is diffusion on a certain time scale. This is old work, but since I was asked, I will review it here. I don't know how far I will get, but I hope to have time at the end to discuss a little bit further along the lines that I just presented.

Okay, we're fine time-wise, so let me describe the physical setup. It is a famous problem in physics, first studied by Anderson at the end of the 1950s, I believe. He studied a model of, let's say, an electron moving in a crystal, and in addition to the crystal potential, which of course gives rise to Bloch waves and all these things, he also put in impurities. He then claimed that there is a phenomenon called localization, and at the time this was so outlandish that nobody believed it. Now, of course, you all know that localization has been proven in various parts of the parameter regime, and the real thing that is open is the thing that was taken for granted at the very beginning, namely that there is actually also a conducting phase of this model. Of course, what I discuss here does not imply anything of that kind, because there is no scaling limit involved in that question; that's a question about infinite time, without any scaling.

So anyway, it's the motion of a particle according to Schrödinger's equation. We have a wave function, and here I formulate the Anderson model, which is really a model on a lattice. The particle has a kinetic term which describes hopping from one lattice point to the next, so you can think of taking the discrete Laplacian. You pose some initial condition $\psi(0)$, you can think of putting the particle at the origin, and then you want to study the time evolution. The Hamiltonian has this kinetic term, $-\tfrac12\Delta$, plus $\lambda$ times the potential, and the potential is supposed to be random. It's a sum:
there is an impurity at every lattice site $a$, and it has strength $v_a$, so if a particle gets there it will get scattered by the potential; and these $v_a$ are supposed to be i.i.d. random variables. So the thing that I called $\phi$ before is now this $V$: it's a random variable, and you see it's exactly of this form, right, we have something explicit plus $\lambda$ times a sum of terms which are proportional to some random variable.

Now you can ask what randomness one takes; there is some freedom in that, and the assumption I would like to make here is that the moments exist up to some order. There are models where one looks at different randomness, but this is one thing I'd like to assume. And if you then ask about things like coupling constants, you can fix the $m_2$ to 1; the $m_2$ is what I called $D$ before. We want to have it centered, with the odd moments zero; actually this is just for stating a general theorem, and later on I will just say we work with Gaussian variables, so that we don't have too many terms to discuss. We are focusing on weak disorder. For large disorder it has been proven, first by Fröhlich and Spencer and then by Aizenman and Molchanov, and by many other people, that there is a localized phase, which is well understood. I will also focus on three dimensions, because some things are easier in higher dimensions.

There is a similar model in the continuum, the quantum Lorentz model, which just has the usual Laplacian, and then you have to discuss a little what this potential is; there you of course don't put delta functions, but regular potentials at certain points in space. It turns out that the lattice model is technically quite a bit harder than the continuum model in most respects.

Okay, so now, this is what I already discussed a little bit, but let me state it once more. The question is how the solution, starting from a certain localized state, behaves for large $t$. If there is no interaction term, then these things are just Bloch waves, so you have some dispersion function $e(k)$, which in the continuum case is just $k^2/2$ and which in the lattice case is some sum of cosines, written here. The question is how, for instance, the expectation of the position operator squared scales with $t$, and in this case it will go like $t^2$, except if you pick some weird initial condition; typically it goes like $t^2$. So this is ballistic motion, $x$ proportional to $t$. If $\lambda$ is not zero, then the expectation is that, for $\lambda$ very small, $\langle x^2\rangle$ goes like $t$ in the diffusive phase, and it remains bounded in the localized phase. Proving this is one of the major problems in mathematical physics.
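(A numerical aside, not from the lecture: the quantities in question are easy to look at on a small lattice. The Python sketch below builds the one-dimensional analogue of the Anderson Hamiltonian for a single disorder realization and tracks $\langle x^2\rangle$ in time; all sizes and the uniform disorder distribution are illustrative choices, and of course in $d = 1$ everything eventually localizes, so this shows only the short-time behavior, not the conjectured $d = 3$ phases.)

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, dt = 400, 0.5, 0.5

# One realization of H = -(1/2)*Delta + lam*V on a 1D chain:
# diagonal 1 + lam*v_a with v_a i.i.d. uniform, off-diagonals -1/2 (hopping).
v = rng.uniform(-1.0, 1.0, size=N)
H = (np.diag(1.0 + lam * v)
     - 0.5 * np.diag(np.ones(N - 1), 1)
     - 0.5 * np.diag(np.ones(N - 1), -1))

# Exact time evolution by diagonalization (fine at this size).
E, U = np.linalg.eigh(H)
x = np.arange(N) - N // 2
psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0          # particle starts at the origin
c0 = U.T @ psi0             # U is real orthogonal here

for k in range(0, 121, 30):
    psi = U @ (np.exp(-1j * E * k * dt) * c0)
    x2 = float(np.sum(x**2 * np.abs(psi)**2))
    print(f"t = {k * dt:6.1f}   <x^2> = {x2:10.2f}")
```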
Okay, so the question is what one should study. One can study Green's functions; a related object, which is useful to study, is the so-called Wigner function. The Wigner function is something one learns about in a first quantum mechanics course, I suppose. It's the question whether, in spite of not having any trajectories, you can have a formulation of quantum mechanics on phase space, where there are again positions and momenta. In fact, you can do that. The function is defined in an at first sight weird-looking way, but it's not actually very weird at all if you think of a pure state in quantum mechanics and of the projection onto this state, which is usually written $|\psi\rangle\langle\psi|$. In physicists' notation, if you look at its kernel and take $x$ and $y$, then this is $\psi(x)$ times $\bar\psi(y)$: the product $\psi(x)\bar\psi(y)$ is really just the integral kernel of this projection operator. This depends on $x$ and $y$, and you try to formulate it so that you have a dependence on a relative coordinate $x - y$ and maybe the center-of-mass coordinate. This combination of $\pm\eta/2$ really just says that we take a Fourier transform in the relative coordinate, with dual variable $v$:
$$W_\psi(x,v) \;=\; \int e^{-i v\cdot \eta}\; \psi\Big(x + \frac{\eta}{2}\Big)\, \overline{\psi\Big(x - \frac{\eta}{2}\Big)}\; d\eta .$$
In the physical definition there are of course some $\hbar$'s involved, which I dropped here; I didn't have them in the Schrödinger equation anyway.

This is easily verified to be a real function, so you could think of it as a density on phase space, phase space now being position $x$ and a dual variable $v$, which is like a velocity. But it's not a density in the sense of a positive density: it can take either sign, and in fact it is positive exactly when the state is a Gaussian state; generically it takes negative values. That's just a feature of quantum mechanics, that you cannot represent it simply with positive densities on phase space. There are variants where you average, so you don't have it pointwise but just in an averaged sense; that's called the Husimi function, and it can be positive if the averaging is such that the uncertainty principle is satisfied.

The other obvious features of this function are that if you integrate over $x$, you get the quantum mechanical distribution in momentum space, and the other way round. In fact, if you take a second Fourier transform in the $x$ variable, which we will do later, it's an easy calculation to see that it just gives you the corresponding combination of $\psi$ and its complex conjugate in momentum space. This is a continuum definition; if you want to do this on a lattice you have to be a little careful, because of this one-half, which doesn't really work on the lattice, but then you sort of go to a lattice with half the spacing, and if you define it properly, it still works essentially in this way. I will not go into the details of this; they're not so interesting. And if you rescale $x$: well, $W$ is essentially $|\psi|^2$, so you want $L^1$ behavior for the density, so one rescales so as to keep the integral of $W$ invariant. Obviously the integral of $W$ over both variables is the integral of $|\psi(x)|^2$, which we assume to be normalized to one, as usual in quantum mechanics.
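(Another aside, not from the lecture: that $W$ is real but generically takes negative values is easy to see numerically. A minimal Python sketch, with my grid sizes and normalization conventions; the state, a superposition of two Gaussian packets, is a standard example with negative Wigner regions:)

```python
import numpy as np

# Wigner function W(x, v) = (1/2pi) * Integral e^{-i v eta}
#   psi(x + eta/2) * conj(psi)(x - eta/2) d eta,  computed by FFT in eta.
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

# Superposition of two Gaussian packets: a standard state with negative W.
psi = np.exp(-(x - 3.0) ** 2) + np.exp(-(x + 3.0) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

W = np.zeros((N, N))
for i in range(N):
    # f(eta) = psi(x_i + eta/2) * conj(psi(x_i - eta/2)) on the eta grid
    f = np.interp(x[i] + x / 2, x, psi) * np.interp(x[i] - x / 2, x, psi)
    W[i] = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))).real * dx / (2 * np.pi)

dv = 2 * np.pi / L
print("min W         =", W.min())              # negative: not a density
print("integral of W =", W.sum() * dx * dv)    # equals ||psi||^2 = 1
```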
Okay, and now we're going to use this scaling; this is in the same spirit as many things discussed here. We're not looking at the microscopic scale anymore, but at the largest scale. So we put in some $\varepsilon$ and rescale these functions, and what I'm going to discuss takes place on the diffusive time scale. As I said, it is a scaling limit of the type where, if you want to go to larger and larger times, you have to make your interaction smaller and smaller. It's not a statement of the kind "you give me a $\lambda$, let's say very small, and we can say something about all times"; it's this combined limit.

In other words, we rescale the little $x$ to capital $X$ with this factor; read differently, capital $X$ is the macroscopic variable, and in terms of capital $X$, little $x$ is very large, $\lambda^{-2-\kappa/2}$. The time is scaled similarly, but slightly differently, already hinting at the diffusion: if you compare with the standard kinetic time scale, which I will explain when it emerges from the equations, you see that $x$ is still larger by a factor. On top of the kinetic scaling you get another factor $\lambda^{-\kappa/2}$ for the $x$ and $\lambda^{-\kappa}$ for the $t$, so the square of the one is the prefactor of the other. That is diffusive scaling; it hints at diffusion already.

On the kinetic time scale, Erdős and Yau proved the occurrence of a linear Boltzmann equation as the effective dynamics; on the larger time scale the dynamics is then given by a diffusion equation. Let me just flash this here quickly; there is some notation, but let's start at the bottom. Here we have our Wigner function, rescaled with this $\varepsilon$, and we are looking at the quantum mechanical time evolution at a time which is scaled with the corresponding time scaling for the diffusion equation; then we take the average over the randomness. Then we test it with an observable $J$, just a smooth test function, and this test function depends on the macroscopic variable, so it is very slowly varying on the microscopic scale. So this is like applying this object to this test function, and then we take the limit $\varepsilon \to 0$. The claim is that it's the same as if you tested the function $f(T, X, e(v))$ on the observable, and that function solves a diffusion equation, with a certain diffusion constant which is specified up there, and the localized initial condition in the microscopic variables gives basically a delta as initial condition. The feature of this diffusion is that it does not mix energies: the diffusion constant is calculated at fixed energy, the diffusion equation depends on the energy, and only at the end does it actually get averaged over $e(v)$.

(Question: what is $e(v)$?) Yes: $e(v)$ is $v^2/2$ in the continuum case, and in the lattice case there is no change, it doesn't get renormalized here; it is $\sum_{j=1}^{d}(1 - \cos v_j)$, and in our case $d = 3$. So it's the symbol of the kinetic term you started with. (And the integration over $x$, is this the discrete case?) Yes, yes, this is for the Anderson model.
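(To summarize the scaling in formulas, my reconstruction of the slide in the notation used so far: with macroscopic variables $(T, X)$ and $\kappa > 0$,
$$x \;=\; \lambda^{-2-\kappa/2}\, X, \qquad t \;=\; \lambda^{-2-\kappa}\, T, \qquad \varepsilon \;=\; \lambda^{2+\kappa/2},$$
so relative to the kinetic time scale $t_{\mathrm{kin}} \sim \lambda^{-2}$ one gains $\lambda^{-\kappa/2}$ in space and $\lambda^{-\kappa}$ in time, i.e. the square of the space gain is the time gain: diffusive scaling. The statement is then, schematically,
$$\lim_{\lambda \to 0}\; \mathbb{E}\,\big\langle W^{\varepsilon}_{\psi(t)},\, J \big\rangle \;=\; \big\langle f\big(T, X, e(v)\big),\, J \big\rangle, \qquad \partial_T f \;=\; D\big(e(v)\big)\, \Delta_X f,$$
with a delta initial condition in $X$ and the diffusion constant computed at fixed energy $e(v)$.)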
I didn't say anything about $\kappa$ yet: $\kappa$ is positive, and actually very small. There was an idea that this should work up to $\kappa$ equal to two, but that was somewhat optimistic; when you start losing factors in the estimates it gets very small. I think in the continuum case one can choose $\kappa$ as one over a thousand, but in the discrete case it's even much smaller. So the exponent is really just a little bit above two, but that makes a big difference for everything.

Okay, so from this point I can mainly switch to the blackboard to discuss a few of these things. We don't need the projector anymore; just plug it in here. What I would like to discuss now is how we deal with this with Feynman graphs. The way this is done is that we take the Duhamel formula and iterate it. The Duhamel formula looks like this:
$$e^{-itH} \;=\; e^{-itH_0} \;-\; i\lambda \int_0^t ds\;\, e^{-i(t-s)H}\, V\, e^{-isH_0},$$
for $H = H_0 + \lambda V$; at first I forgot the factor $-i\lambda V$, just look, we just had it. (Question: is there a zero missing at the $H$ in the middle term? No, there is no zero; you'll see.) I wasn't planning to derive the formula; you just write time-ordered integrals. But actually, in the spirit of the introduction, let me do the following: I want to compare it immediately to what we had before. By the spectral theorem and residues,
$$e^{-itH} \;=\; \frac{i\, e^{t\eta}}{2\pi} \int_{\mathbb{R}} d\alpha\;\, e^{-it\alpha}\, \big(\alpha + i\eta - H\big)^{-1}.$$
This is just a residue formula; of course you know that you can express every function of an operator through its resolvent. Here is the resolvent of $H$ at $\alpha$, with the $\alpha$ moved into the upper half plane by some offset $\eta$, and if you do this contour integral you pick up this one pole, and it gives you $e^{-itH}$. Let's call this $R_H(\alpha + i\eta)$; that's the usual notation for a resolvent. Then you may remember the resolvent equation,
$$R_H(z) \;=\; R_{H_0}(z) \;+\; \lambda\, R_H(z)\, V\, R_{H_0}(z),$$
the standard resolvent equation, which corresponds exactly to the Duhamel formula up there. Do you have any question up here? There has to be a full $R_H$ on the right, because you use the resolvent equation to generate, maybe, a geometric series. And here, of course, $z$ is $\alpha + i\eta$; I'll discuss this later. This formula will actually be used on a technical level as well.
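(To see that the contour formula above is really just the residue theorem, here is the one-line check, not done in the lecture: for a spectral value $E$ of $H$ and $t > 0$, closing the contour in the lower half plane picks up the single pole at $\alpha = E - i\eta$,
$$\frac{i\, e^{t\eta}}{2\pi} \int_{\mathbb{R}} \frac{e^{-it\alpha}}{\alpha + i\eta - E}\; d\alpha \;=\; \frac{i\, e^{t\eta}}{2\pi}\, \big(-2\pi i\big)\, e^{-it(E - i\eta)} \;=\; e^{-itE},$$
the prefactor $e^{t\eta}$ exactly compensating the $e^{-t\eta}$ coming from the pole's offset; summing over the spectrum gives $e^{-itH}$.)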
There are two ways of proceeding; the procedure is going to be the following. Remember, what I showed you on one of the slides is that we had a covariance, or rather a resolvent, which gets expanded. That is exactly what you can do in this formula here, and then you get exactly the structure I showed you before, with the $\phi$'s replaced by $V$'s. And of course you can translate this up here, which just means iterating the Duhamel equation: you always get one term where there is the full $H$, and all other terms contain just $H_0$. In fact one can use both for doing estimates; the resolvent representation is preferable because, as we will see, if you do things with $e^{-isH_0}$, these are unitaries and you can only bound them by one, whereas in the resolvent case you have some more information.

So now we iterate this out. That gives us something like
$$\psi(t) \;=\; \sum_{n=0}^{N-1} \psi_n(t) \;+\; \Psi_N(t),$$
a sum of explicit terms plus a remainder term. The little $\psi_n$'s are given by
$$\psi_n(t) \;=\; (-i\lambda)^n \int_{[0,t]^{n+1}} \Big(\prod_{j=0}^n ds_j\Big)\; \delta\Big(t - \sum_j s_j\Big)\; e^{-is_nH_0}\, V\, e^{-is_{n-1}H_0}\, V \cdots V\, e^{-is_0H_0}\, \psi_0 ,$$
so now we have integrals over the $s_j$, with the condition that the sum of the $s_j$ equals $t$: it's the usual simplex of times that gets integrated, with these insertions of $V$ between the free propagations. You apply this to $\psi_0$ and iterate, and you get all these explicit terms plus the remainder term, which is of course
$$\Psi_N(t) \;=\; -\,i\lambda \int_0^t ds\;\, e^{-i(t-s)H}\, V\, \psi_{N-1}(s)$$
(with the $V$, which I forgot at first). So this is the expansion for the wave function.

Okay, now it should look familiar from the case we discussed before. We can still insert, everywhere, that $V$ was a sum over, let's say, $x$: the $v_a$ times $\delta_{a,x}$. We insert this everywhere, and you see what you get is a long sum over all these $a$'s, of all these terms with the $v$'s placed between the free propagations, and typically we will denote this by a graph. So this would be $v_{a_1}$, $v_{a_2}$, and so on, going up to whatever, and all these lines here correspond either to $e^{-is_jH_0}$ or, in the resolvent picture, to something like $(\alpha - H_0 + i\eta)^{-1}$. Now I see we have two uses of alpha, so let me not change this; in fact I call them $a$ in my notes, so let's stick with $a$'s here.

And we can actually think of an interpretation of this in a microscopic, physical sense: you start with $\psi_0$, you do a free time evolution, and then you have an interaction with some $v$. If we insert all these sums, this means basically we have a scattering event at some $a_0$ here (the numbering here is the wrong way around), then you propagate again with such a factor, then you have another scattering, and you propagate again. That's the intuitive notion one has about this; one shouldn't take it too seriously, but you can have an intuition this way. And the important thing to note here is that these times add up to $t$.
So this is the usual thing, what is usually called time-dependent perturbation theory. Then remember what we had: we had a Wigner function, which contains two of these, a $\psi$ and a $\bar\psi$, so we have to do this twice and put it together. We'll do this shortly; let me maybe say something else first, namely that it is actually very important that we can write it in this way. You could expand to infinite order, and then you don't know what convergence properties you really have. But since we have this $\Psi_N(t)$ here, you see that there is the obvious estimate, the unitarity estimate, using that $H$ is self-adjoint:
$$\|\Psi_N(t)\| \;\le\; \lambda \int_0^t ds\; \big\| e^{-i(t-s)H}\, V\, \psi_{N-1}(s) \big\| \;=\; \lambda \int_0^t ds\; \|V\, \psi_{N-1}(s)\| ,$$
since this $e^{-i(t-s)H}$ is unitary we can just leave it out in the norm. And then you can decide what you would like to do; you could, for instance, continue with the bound $\lambda\, t\, \sup_{s \le t} \|V \psi_{N-1}(s)\|$. The crucial observation in this unitarity bound is that you lose essentially a factor of $t$, but you gain the absence of the full $H$: now everything is totally explicit, these are integrals which you can calculate. This is a way of doing perturbation theory, if you want; of course, you will discover very quickly that you can basically do none of the integrals explicitly, but you can do estimates on them very well.

So this is one important observation. The second observation, which makes it actually useful for us, is that the Wigner function is continuous in the $L^2$ norm, so one can use this to write down an error estimate for the Wigner function; let me just write it down here. You can easily verify it: we had this integral of the Wigner function against the observable, which I now write in bracket notation, and we look at the difference between the true time evolution and the explicit part, the first terms in this expansion, so the finite-order Duhamel expansion. That difference is bounded by some prefactor, the expectation of $\|\Psi_N\|^2$, times $t$, times a supremum of an expectation involving $V$. So this inequality tells us that we have an error estimate for the object we are interested in; remember, we wanted to show that this converges in the limit. We have a factor $t$ here, which one has to take care of, and then one has this supremum, again of an explicit term. So now there are no $H$'s in the game anymore, just $H_0$'s, and we can really start estimating.

Okay, maybe we can take a break of five minutes. Let me say, as a last thing before the break: much of this carries over to the many-body case, but when you do this unitarity estimate in the many-body case, you lose connectedness of your diagrams, and that is a disaster. So this is one of those estimates which do not obviously carry over, or are not obviously useful, in the more general setting. So I think we can take a little break of five minutes and continue then.

(Question: so in the end you prove that you have a diffusion equation; and you also said that there is no way to make a statement about the spectrum?) I'm not saying there is no way to make a statement about the spectrum;
I just don't know. There's a short answer and a long answer. The short answer is that, in this scaling, the spectrum is of course infinitely far away. And the long answer is that this is still a short time scale, because the diffusion doesn't mix energies; in the end you expect just a diffusion equation whose diffusion constant is not energy dependent in the way I wrote it down here. But I guess you would have to go beyond $\lambda^{-4}$ in time to see that, which, as I said, is infinitely far away. But you know, maybe one can cover an infinite distance in a finite time; who knows.

Okay, so now we've warmed up, and now the sport starts, sort of, because we have all the ingredients to go into the details. The first thing one does: remember that the Wigner function was this product of two $\psi$'s. What we do is just write that down and see how it turns out. This is the point about Feynman graphs: you can actually talk first, without writing a huge formula. We have this expansion for the $\psi$, and the $\bar\psi$ also has the same structure. (This is of course already implicitly assuming that we have expanded to finite order, which I've just justified to you.) So in general we have different orders upstairs and downstairs, $n$ and $\tilde n$, and then you get some formula for this, $W_{n,\tilde n}$; if you want the formula, it's actually pretty lengthy. So let's see: we have the external variables $v$ and $\xi$ of the Wigner function, then there is the measure $d\mu_n(s)$ over the times, and then come all the phase factors, $e^{-i s_j e(k_j)}$ upstairs and the complex conjugate ones, $e^{+i \tilde s_l e(\tilde k_l)}$, downstairs. Then we have a product of potentials, $\hat V(k_j - k_{j-1})$, and the initial factor $\hat\psi_0(k_0)$, and then the same thing complex conjugated; the product upstairs runs over $j = 1$ to $n$ and the one downstairs over $l = 1$ to $\tilde n$. This is not very nice to write down, but in principle it's straightforward.

All I did is the following; something which I didn't write down before, I can now say. Here I really want to look at $\hat\psi$, in Fourier representation. If I start with $\hat\psi_0$, then this free evolution acts as a multiplication operator, obviously, and since $V$ is a multiplication operator in position space, it acts as a convolution in momentum space: we get $\hat V(k_1 - k_0)$, integrated over $k_1$. And then you just go on like this; this is exactly the structure you see down here, the product of the $\hat V$'s. So let me label the graph so that we can see this: this would be $k_0$, $k_1$, and so on (I now draw it from the left, changing convention), and $\tilde k_0$, $\tilde k_1$ below. Now, to think about these things, you should think about momentum flows. So if you look at this, you have $k_0$ and $k_1$.
Let's take the convention to draw this arrow here. Then you see that what we will eventually get are just Kirchhoff rules for the flow of momentum. You can think of these lines as pipes: momentum $k_0$ flows here, and here it's $k_1$, and you can think of the rest, the difference $k_1 - k_0$, flowing down here; actually what flows down here is $k_0 - k_1$, because it has to split up. For the moment this just ends at these potentials, and we still have to integrate over the randomness, to average, to get a Feynman graph we can analyze. But this is just a straightforward writing-out of the thing.

Obviously you can write a similar representation by taking the $\alpha$-integral which we had. (I just erased it, and I'm not supposed to write here, so let me just say it, or write it in here as well.) Then we have an integral over $\alpha$ (at some point I will drop all these $2\pi$'s, because it gets too long to write everything), and instead of all these phase factors one gets a product of $c_\eta(\alpha, k_j)$'s: instead of the oscillating factors we just have resolvents. Then there is the $\hat\psi_0(k_0)\,\overline{\hat\psi_0(\tilde k_0)}$, and then a big factor which I just call $V(k, \tilde k)$, collecting all these potential factors; and the tilde side is complex conjugated. The $c_\eta(\alpha, k)$ is of course
$$c_\eta(\alpha, k) \;=\; \frac{1}{\alpha - e(k) + i\eta}.$$
So this is just the Green's function version of the same formula. In general, the idea is: if you try to do low-order calculations, where most of the time you want the terms explicitly, the time representation is the better one; but if you really want to analyze high orders, then the resolvent representation is the best, or at least the one that works best for us.

Now, because of this factor $e^{2t\eta}$, which we got from the residues, we will pick $\eta$ equal to $1/t$, so that this factor stays bounded. So you see that the situation is: you have an integral with a lot of near-singularities, and the singularities are avoided only by something of the order of $1/t$; and $t$ will get large, of course. The question is then how to arrange the sum in order to do anything with it.

And of course this is still the non-averaged $W$; let me again try color here. If we take the expectation, then we integrate over these random variables; remember, these were sums of the little $v_a$'s times some fixed interaction function, in this case a delta. So here we get random variables which are now integrated, and this integration pairs up these lines, like I explained at the beginning, except that these are real variables, so it's actually even simpler than in the complex case. So what this does is connect these lines. (Let me make some more room upstairs.) Then we can pair them up in any way we want: let's do this and that, and then go down here, for instance. So that would be a possible pairing, and these are the Feynman graphs we're going to discuss. So the expectation pairs things up; and what does the pairing up of the lines really mean? I will show this to you in one low-order example, where you see it in full detail.
We'll probably do only one or two examples, so let's just look at this $W_{1,1}$ (let me not write the $t$ and the external variables here). As a graph, for the moment, it looks like this, and if we write it down, it's of course again this rather large thing: we have the $\hat V(k_1 - k_0)$, and downstairs we have something similar, complex conjugated. When we do the average, in the graphical picture obviously only this one pairing can happen, and that is the average of these two factors. I'll do this although it's completely elementary in principle: what it gives us, by translation invariance, is momentum conservation, actually, at this point. So what we do here, if I put this in: you remember these were sums over terms associated to different $a$'s, with a $\delta_{x,a}$, so we essentially get $e^{-ia\cdot(k_1 - k_0)}$ times $e^{+i\tilde a\cdot(\tilde k_1 - \tilde k_0)}$, times the expectation of the random variables, $\mathbb{E}[v_a v_{\tilde a}]$. This comes from plugging in the potential, which was a sum of terms located at these $a$'s; because they are so located, the Fourier transform is just a plane wave. And this expectation, by our assumption, is $\delta_{a,\tilde a}$. So we really get only one sum over $a$, and you recognize this sum as a delta function: it becomes
$$\delta\big( (k_1 - k_0) - (\tilde k_1 - \tilde k_0) \big).$$
And this is maybe the most important thing you should keep in mind from this: taking the expectation just continues the momentum flow from upstairs to downstairs. So when we look at the large graph, we can think of the momentum flow as being continuous through all these lines.

Okay, so now we can plug this in, and then, and I will not do this in much detail here, you can do, for instance, the $\tilde s$ integral, and the $\tilde s$ integral will turn out to give you energy conservation. Of course it's an approximate conservation, because $\tilde s$ is only integrated up to $t$, so you don't really get a delta function, but only a smeared version of it. But if you take this in the limit we are interested in, then the $W_{1,1}$ simply converges to a prefactor $\lambda^2$, times a factor $t$, times an integral $\int dk_0$ with the initial-state factor $\hat\psi_0(k_0)$, times
$$2\pi\, \delta\big( e(k_1) - e(k_0) \big).$$
The factor $t$ comes from the time integral together with the identification of this energy-conserving delta function. What you see in this end result: you have $\lambda^2 t$, which is what's called the kinetic time; and you have energy conservation, and there must also be momentum conservation, which removes one of the $k$ integrals. This is actually one term in the Boltzmann equation; it's the last term in the linear Boltzmann equation. Actually, sorry,
it's the gain term: you have a $k_0$ which goes into $k_1$, which is the thing that comes out at the end. So this term corresponds to one of the terms in the Boltzmann equation. Then there is another graph, the one where you pair upstairs, which gives you the loss term in the linear Boltzmann equation. I don't want to go into the details of these terms, unless you ask me to; it's more of this calculational stuff. Except to say: at lowest order you identify what the terms in the Boltzmann equation are, and then you can imagine that when you iterate the equation you will get repetitions of these, and repetitions of those, and a zillion other graphs; but in fact those are the important ones. Somehow, Feynman graph expansions work when the leading term looks simple; if the leading term is already terrible, then it's maybe not the best idea to start that way at all. I've left out a lot of details in this calculation, but they are really details of the type that you identify such integrals as something like $\sin x / x$, which is a typical approximate delta function if scaled appropriately.

Okay, so now we're basically at the point of saying how we could actually estimate graphs. It should be clear from this that even doing what is called a one-loop graph, adding one more line and loop to the graph, is tedious. But that's not a bug, that's a feature: these things encode all the physical information about the model, and these correlation functions have a lot of interesting structure; it just wouldn't be very interesting to write it all down here. What I will probably be able to do in the rest of today is discuss a little bit how one can estimate graphs in general, and then I will focus on this graph in particular and show you what its order of magnitude is. Let's see if I can get this done today. (Question.) Yes, yes, they're geometric series, in fact; I will show you that.
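(For orientation, before the estimates; schematically, in my writing, the lecture did not display it: on the kinetic scale the gain term just computed and the loss term from the upstairs pairing assemble into the linear Boltzmann equation,
$$\partial_t f_t(x, k) + \nabla e(k)\cdot\nabla_x f_t(x, k) \;=\; 2\pi \lambda^2 \int dk'\;\, \delta\big( e(k) - e(k') \big)\, \big[ f_t(x, k') - f_t(x, k) \big],$$
with the $f(k')$ term the gain, scattering from $k'$ into $k$, and the $-f(k)$ term the loss, both living on the energy shell $e(k') = e(k)$.)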
Okay, so let's see what I wanted to show here. First of all, I haven't completed this formula: I wrote the expectation on the left-hand side but didn't write it out on the right-hand side. As I told you, taking the expectation introduces all these pairings, and that means, by the calculation we just did, that we introduce a lot of delta functions. So what we really get here is, in addition, a sum over pairings of products of deltas; let me just write $\Delta_+(k)\, \Delta_-(\tilde k)\, \Delta_m(k, \tilde k)$. Since I've written already so much on the board, let me not write out what they are but just describe them: we have seen in this calculation that a pairing gives us a delta function; $\Delta_+$ collects all delta functions from pairings upstairs, $\Delta_-$ those downstairs, and the ones going from up to down are in $\Delta_m$. So now we have basically a complete description of our graph values, and we can try to do estimates on these graphs.

There are a number of estimates which one can do in a very simple way, but which are not all that useful. For instance, look at the upstairs part, in the representation where we still have the times and the oscillating factors. Something one can always do in this representation is estimate by absolute values, and then all the oscillatory factors just drop out. Suppose we have already averaged over the $v$'s, so I think of the potentials as being replaced by this product of deltas. Then we can do the integral over the $s$ variables. It's no longer on the board, but you will remember that the $s$ variables were integrated over a simplex: $t$ is the total time, and the $s$'s have to add up to $t$. Such a simplex has volume $t^n/n!$, so there would be an easy estimate: the term is bounded by
$$\lambda^{n + \tilde n}\; \frac{t^n}{n!}\; \frac{t^{\tilde n}}{\tilde n!}$$
times the remaining integrals with the $\hat\psi_0$'s, which I just bound separately. And you see that this is not such a great bound: if we expect $\lambda^2 t$ to be the appropriate quantity, then this is really very large, so it will diverge in the limit; even in the kinetic limit this is divergent. It's sort of obvious to anyone who knows a bit about quantum mechanics that it's a very bad idea to dump all these oscillatory factors. That's why, when one starts doing bigger estimates, one uses the resolvent representation, where you can take absolute values of these factors and still get something useful out of it, because you still have a $k$ dependence in them.

The first bound I would like to show you is the following. Suppose we consider a pure up-down pairing; in other words, there are no pairings within the upstairs line or within the downstairs line separately. So it could be something like this, for instance; we don't have anything of this type in here. Then the form of the value of the graph, the value of a particular graph in the sum over graphs: well, we have all these $c_\eta$'s, and we do have these integrals over $\alpha$ and $\beta$. Let me just see whether I want to do this representation... sorry, let's stick with the oscillatory one for a moment.
If we have this, then we have the following structure. We have a function $K_n(k, t)$, and a $K_{\tilde n}(\tilde k, t)$; I'll write down in a moment what they are. Then $\hat\psi_0(k_0)$ and $\overline{\hat\psi_0(\tilde k_0)}$, then this delta function which I just described in words, and then $d^n k\; d^{\tilde n}\tilde k$. So the only thing I did is to introduce the shorthand $K_n(k, t)$, which in the time representation is
$$K_n(k, t) \;=\; \int d\mu_n(s)\; \prod_j e^{-i s_j\, e(k_j)} .$$
Then the value of the graph takes this form, and all we now do is apply a Schwarz inequality, with this as the integration measure.

Maybe I should have explained first what we have here: we have integrals over all the $k$'s, $k_0$, $k_1$, and so on, and we do integrate the $\tilde k$'s too, but we have delta functions. So what remains? In general, when you have a graph, you can fix momenta in terms of so-called loop momenta, and the way this goes is that one picks a spanning tree of the graph. For instance, we would put all these pairing lines into the tree, and in this case we can also put in all the lines downstairs. This is obviously a tree, because we have only up-down pairings, right, you can't get any loop that way. The general procedure is that by the momentum conservation rules on the graph you can determine the momenta on the tree lines: here $k_0$ goes in and $k_1$ goes out, so $k_0 - k_1$ flows through, and that fixes this momentum; then we can do the same everywhere, going back and fixing things starting from $k_0$. In other words, if you have a tree, you always have leaves, vertices of incidence one, and you go in from the leaves and fix all the momenta; this is a unique procedure once you have a spanning tree. In our case this just means we can remove the $\tilde k$ integration completely, or we could remove the $k$ integration completely; that's our choice.

The rest of the integrand has an obvious factorization structure, so let's just do a Schwarz inequality; allow me to write it this way. Now you see that in this term here we still have an integration over $\tilde k$, but the integrand is independent of $\tilde k$, so we can just kill it; or in other words, we could also, for free, replace the pairing by the pairing which corresponds to the identity. Because if we do that, then we can again write this as $K_n$ times $\overline{K_{\tilde n}}$ (this should be the complex conjugate, sorry), and we see that this is actually the value of the graph corresponding to the identity pairing. So this general argument, just a Schwarz inequality, shows that the value of a general graph with only up-down pairings (and the pairing is the only datum of the graph) is bounded in absolute value by the value of the ladder graph. So the ladder is certainly the largest contribution; but as we know, there are very many contributions.
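(Schematically, in compressed notation, the step just performed reads: after the spanning-tree step fixes $\tilde k = \tilde k(k)$, a volume-preserving change of variables by the Kirchhoff rules,
$$\big|\mathrm{Val}(\pi)\big| \;\le\; \Big( \int \big| K_n(k,t)\, \hat\psi_0(k_0) \big|^2\, d^{n+1}k \Big)^{1/2} \Big( \int \big| K_{\tilde n}(\tilde k, t)\, \hat\psi_0(\tilde k_0) \big|^2\, d^{\tilde n + 1}\tilde k \Big)^{1/2} \;=\; \mathrm{Val}(\mathrm{ladder}),$$
the last equality for $n = \tilde n$, since the identity pairing sets $\tilde k_j = k_j$.)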
So that by itself doesn't tell us too much, but it shows that it might be good to investigate this ladder graph first. (Question: at this stage, what you prove is that every other graph is bounded from above by the ladder graph, but not the sum of all other graphs?) Yes, exactly: each one, and there are $n! - 1$ of them. So it's exactly the point that it's not sufficient to prove anything for any individual graph; you have to look at the sum of graphs. And the fact that we have an expansion with a remainder term allows us to do this; if we had to expand to infinite order, we'd have a real combinatorial problem with this expansion.

I suppose I'm almost out of time, so I will not write this down, but next time I will actually do this calculation for the ladder graph; it's an important calculation, and we will see that it is really the term which fits the kinetic scale $\lambda^2 t$. I will also discuss the contribution of these other graphs, and then we'll try to put things together by renormalizing. Well, it's called renormalization; you will see that it's nothing mysterious. Okay, thanks, that's it for today.

(Question: at which order do you expect to see quantum effects? Because here there are the ladder graphs and the other ones, I don't know their name; you essentially have only classical effects. So what is the order where you would see something like a quantum correction?) Actually, I would say all of them are quantum corrections. This one, of course, corresponds to a classical term, which is the reason why there is then a classical equation at the end; but all of the others are quantum corrections. So basically, if you want, the proof consists of showing that in this scaling limit the quantum corrections die out. But by itself the problem is never really classical; it's also not semiclassical. There's no separation of scales between the typical quantum wavelength, if you think of $\hbar/p$, or simply $1/p$, and the range over which the potential varies; those remain the same, so it's not a scale separation. (But for instance, if you look at the fluctuations, do you see the quantum effects?) Well, you would see them if you tried to determine the asymptotics more precisely in the limit, and you don't see them there. The question that then becomes interesting is what happens when you go beyond $\lambda^2 t$, which is after all the subject of this; there this argument doesn't help you anymore, and maybe one of the quantum effects is this term here, which generates slightly different scaling behavior. I've never thought about it in terms of quantum versus classical, so I'm afraid I can't give a better answer right now.

(Question: you've described here a weak coupling limit, small $\lambda$, and you look at localized potentials; Erdős and Yau have done this for the kinetic scaling. Can one also extend this to the sort of low-density limit you are describing, or do the terms get too big?) I don't know how. We really need the weak coupling everywhere. Basically, it's possible one can do it, but I don't know how. Okay, any more questions? Or maybe online?
Is there something online, Sergio? Okay, maybe then we go into the coffee break. This course will then continue on Thursday morning, and we reconvene after the break.