You have already heard about the quantum Lorentz gas in Manfred Salmhofer's lecture. The Lorentz gas is a much simpler system than the full Boltzmann gas because it is essentially a one-particle dynamics: non-interacting particles moving in a fixed array of scatterers. Lorentz set this up around 1904-1905 as a model of electrons moving in a metal. Of course, this was pre-modern quantum mechanics, and he also modelled the scatterers as hard spheres. Now, as you have also seen in Manfred Salmhofer's lecture, we are particularly interested in what happens when you replace the hard spheres by smoother potentials, and that is what we will do; in the quantum setting that is important. Right, so here is the Lorentz gas. I am interested not in the weak-coupling limit, where you make your potential smaller, but in the low-density limit, just as in the classical setting of Boltzmann and Lorentz. This means you shrink your scatterers, and as you shrink them, as you all know, the mean free path length increases, in two dimensions like one over the scatterer radius, so if you did not rescale, in the limit you would just get free motion. So we need to rescale our length and time units to get a non-trivial limit. In d dimensions the mean free path length scales like one over the total scattering cross-section of each individual scatterer, which goes like 1 over r to the d minus 1. So you see here we introduce new macroscopic variables, denoted by x and y, obtained by rescaling the old microscopic variable (that is a typo on the slide, I apologize, there should be a q) by the mean free path. You also rescale the time by the mean collision time, which scales in exactly the same way, because we are moving with constant speed in this model. So now we have a flow in this rescaled Lorentz gas, and I am just going to show you pictures of how it looks in these macroscopic coordinates.
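To make the scaling explicit, here is a sketch of the rescaling just described; the constant c_d and the scatterer density n are my own notation, not from the slide.

```latex
% Total scattering cross-section of a single scatterer of radius r in d
% dimensions, and the resulting mean free path length:
\[
\sigma_{\mathrm{tot}}(r) \sim c_d\, r^{d-1},
\qquad
\ell(r) \sim \frac{1}{n\,\sigma_{\mathrm{tot}}(r)}
       \sim \frac{1}{c_d\, n\, r^{d-1}} .
\]
% Macroscopic position and time (unit speed, so the mean collision time
% scales like the mean free path):
\[
x = r^{d-1}\, q ,
\qquad
t_{\mathrm{macro}} = r^{d-1}\, t_{\mathrm{micro}} ,
\]
% which normalizes the macroscopic mean free path to order one; without
% this rescaling the limit r -> 0 would just give free motion.
```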
So this is the original picture; the mean free path here is 2, and remember it goes up as 1 over r. If we now rescale our space units in the appropriate way, so that the macroscopic mean free path in these coordinates is normalized to be 1, then the pictures look like this. In some sense you just zoom out, and you can see, or imagine, that we expect a limiting process in the classical sense, where a particle flies for some constant time, then scatters, flies in another direction, and so on. And indeed that is what Lorentz's heuristic was based on, really following Boltzmann's original ideas: you get a limiting random flight process in this way, governed by the linear Boltzmann equation that I have written down here. This is the linear Boltzmann equation that I think all of us have seen by now. In the classical setting, the differential cross-section that appears here has this explicit formula for hard scatterers; for more general smooth potentials you get analogous, well-known formulas. And even though Boltzmann's and Lorentz's idea was originally to use this as a step, as was very nicely explained in Lorentz's lecture, to understand the laws of thermodynamics and give them a microscopic justification, this equation has by now seen many, many applications in various areas.
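Since the equation is only on the slide, let me record a standard form of it here; the symbols (f for the phase-space density, sigma for the differential cross-section) are my own shorthand, and the hard-disk formula is the usual two-dimensional one.

```latex
% Linear Boltzmann equation for the phase-space density f(t,x,y),
% with elastic collisions (outgoing speed |y'| = |y|):
\[
\bigl(\partial_t + y\cdot\nabla_x\bigr) f(t,x,y)
  = \int_{\mathbb{S}^{d-1}}
    \bigl[\, f(t,x,|y|\omega) - f(t,x,y) \,\bigr]\,
    \sigma(|y|\omega,\, y)\, d\omega .
\]
% For hard disks of radius a in d = 2 (unit speed), impact parameter b
% and deflection angle \theta are related by b = a\cos(\theta/2), so
\[
\sigma(\theta) = \Bigl|\frac{db}{d\theta}\Bigr|
              = \frac{a}{2}\,\bigl|\sin(\theta/2)\bigr| ,
\qquad
\sigma_{\mathrm{tot}} = \int_{-\pi}^{\pi}\sigma(\theta)\,d\theta = 2a .
\]
```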
Okay. Now, the derivation of the linear Boltzmann equation from this microscopic model, the Lorentz gas, is by now quite well understood if your scatterers are placed at random points: the early works of Gallavotti, Spohn, and Boldrighini, Bunimovich and Sinai showed that indeed you can rigorously derive the linear Boltzmann equation. Gallavotti and Spohn proved this in the so-called annealed situation, where you keep averaging over the scatterer configuration, and Boldrighini, Bunimovich and Sinai, in a really beautiful paper, proved that in fact the convergence holds for a typical realization of a Poisson point process as the scattering centers. I am sure Bálint Tóth in the next talk will review that part of the literature in more detail than I will here. The periodic Lorentz gas has some very beautiful, surprising features. If you place your scatterers on a periodic grid, as I showed you on the slides before (there was a random configuration and next to it the periodic one), then, as François Golse observed, because the distribution of the free path length, the first hitting time, has a very heavy tail, the linear Boltzmann equation in fact cannot hold as a limit. In work with Emanuele Caglioti he identified the limit process in two dimensions, and then Andreas Strömbergsson and I were able to prove the convergence of the Lorentz gas to a limit process in arbitrary dimensions, using ergodic theory on the space of lattices. More recently we have extended this to quasicrystals and other scatterer configurations that are not necessarily periodic, as well as to soft potentials under certain assumptions.
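The limiting random flight process behind the linear Boltzmann equation (exponential flight times, Markovian changes of direction) is easy to sketch numerically. This is a minimal two-dimensional toy with isotropic scattering instead of the hard-disk cross-section; all names and parameters here are mine, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_flight(t_max, rng):
    """One sample path of the limiting random flight in 2D:
    fly with unit speed for an Exp(1) time (mean free time normalized
    to 1), then pick a new direction; the new direction is chosen
    uniformly here (isotropic toy cross-section).
    Returns (final position, number of collisions)."""
    pos = np.zeros(2)
    theta = 0.0            # initial direction of motion
    t, collisions = 0.0, 0
    while True:
        u = rng.exponential(1.0)              # exponential flight time
        if t + u >= t_max:                    # flight reaches the horizon
            pos += (t_max - t) * np.array([np.cos(theta), np.sin(theta)])
            return pos, collisions
        pos += u * np.array([np.cos(theta), np.sin(theta)])
        theta = rng.uniform(0.0, 2 * np.pi)   # isotropic scattering
        t += u
        collisions += 1

# The zero-collision term of the collision series is free flight with
# weight e^{-t}: the probability of no collision up to time t.
t = 1.0
n = 20_000
no_collision = sum(random_flight(t, rng)[1] == 0 for _ in range(n)) / n
print(no_collision, np.exp(-t))   # the two numbers should be close
```

The Monte Carlo estimate of the no-collision probability matching e^{-t} is exactly the exponential loss of mass in the zero-collision term discussed below.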
So in some sense the situation for the Boltzmann-Grad limit of the classical Lorentz gas is now well understood, and my next step was to ask: what about the quantum Lorentz gas? The situation here is more complicated, and open in general. There have been important papers, again by Herbert Spohn, and by Erdős and Yau, who looked at the weak-coupling limit of the Lorentz gas on the kinetic scale; Manfred Salmhofer mentioned this in his lecture, where of course if you take longer times you are in the diffusive scaling limit he was talking about. But I did not want to look at the weak-coupling limit; I wanted exactly the same limit as in the classical setting, the low-density limit, where you shrink the radius of the scatterers and rescale time and space units in exactly the same way. For the random setting there is a very long paper by Eng and Erdős, who adapted the techniques of the paper of Erdős and Yau to deal with precisely this limit, and proved again that in the annealed setting, for randomly distributed scatterers, you converge to the linear Boltzmann equation. The obvious question now is what happens for non-random scatterer configurations, in particular for the periodic setting. If you read some of the papers and surveys, they usually say the periodic case is easy, because we understand everything: solid-state physics is to a large extent based on Floquet-Bloch theory, which says that for a periodic potential you can decompose it, you understand the band structure, and so on, so it is solved. But it is not, and I am trying to convince you in this lecture that it is actually not easy; in the particular scaling limit I will discuss, it is actually harder than the classical case. So there is this slight paradox: you read the literature, and the statement is that the classical case is hard and the quantum case is easy. That is not so, and I hope when you
leave this lecture I will have convinced you otherwise; come and see me in the coffee break and we can continue. In particular I will talk about papers that came out recently with Jory Griffin, a former student of mine who is now at the University of Oklahoma, and what I want to impress on you is that we see a new limit process in this scaling: in precisely the same scaling used by Eng and Erdős, the periodic case produces something very interesting. So here is the setup. We study the classical non-relativistic Schrödinger equation; the quantum Hamiltonian is just the Laplacian plus a potential. I have put a lambda in here for reasons that will become clear later, but really think of lambda as being 1; lambda is fixed, we are not looking at the weak-coupling limit where lambda is small, we are looking at the low-density limit where the scatterers are rescaled in this way. So my potential V is a superposition of single-site potentials W, identical at each scatterer, let us assume in the Schwartz class, each scaled by a factor of r, so it is small. That is our potential, and the idea is that we take r to 0; r defines, if you like, the effective support of this potential W (it would be the actual support if W were compactly supported). The solution of our Schrödinger equation is given by a unitary operator acting on the initial state, exactly as in this equation here, so everything is very explicit. And this is the situation we have: you start with some initial wave packet, and the h that appears in front of the Laplacian you can think of as describing the scale of the wavelength of that initial packet. And 1 over r to the d minus 1 is the mean free path, so that is our classical microscopic scale; the scatterers are separated by 1. In fact, the limit I want to study, which I think is the most interesting one, and that is also the one that
Eng and Erdős studied, is when the size of the scatterer is comparable with the wavelength. As you will see, the reason why this is, to me, the most interesting limit is that you get a very beautiful combination of semiclassical propagation between scattering events and proper quantum scattering at each scatterer: you really see quantum scattering, because the wavelength is comparable with the size of the scatterer. Right. In Manfred Salmhofer's lecture you saw the use of the Wigner function to describe the phase-space distribution of the quantum system. I am using something virtually equivalent to it, namely pseudo-differential operators; you can see the structure here is very similar to the definition of the Wigner function. For those who have not seen this, the idea is basically that you take a function on your phase space, where x describes the position and y the momentum of your particle, and you associate with it an operator; that operator, if you like, represents the phase-space distribution of the quantum state you apply it to. The precise way this operator is defined is written down here: take a to be a Schwartz function on R^d cross R^d, the phase space, apply this integral transform to your favorite function f, and that defines Op(a). But now, and this is the really important trick, we need to scale things correctly so that we actually go into the right limit. As you remember, we want to scale our space units by the mean free path, so I have put the r to the d minus 1 here: we start with our observable, but we now measure everything on the macroscopic scale, because we want the observable a, the little a, to survive into the limit. Similarly, remember that the momentum is y, and we scale momentum by h, our wavelength, so we measure momentum in units of the wavelength; again that is the right scaling to see something
non-trivial in the limit. So that is basically the setting; as I said, if you have not seen this, just ignore it. What you need to remember is that I have associated with a classical observable on my phase space a quantum observable. Now I am going to take the limit, and I want to see something like the linear Boltzmann equation for the a; that is the idea, the operators should all disappear when I take this limit. So these are the questions. Pick your favorite scatterer configuration P, random or deterministic: random as in Eng and Erdős, deterministic as in the lattice case. What you want to understand is the time-propagated observable A(t); did I define this here? Oh yes, it is up here, sorry. You start with an initial quantum observable A, then propagate it with the solutions of our Schrödinger equation, the propagators corresponding to the Schrödinger equation, and you get an observable at time t. The idea now is that we choose the initial A to be given by my quantized classical observable, I propagate it, and I want to understand where it is at time t in the Boltzmann-Grad scaling; that is why you see here, exactly as in the classical setting, that we measure time in the same units. Then I just test it against a test observable b, exactly as in the classical situation with weak convergence. So this is the quantum analog of weak convergence, if you like, where we now have an inner product on our operator space with respect to the Hilbert-Schmidt norm. And the question is: do we get a limit like this, where we want to identify a family of operators L(t) describing that limit? As you see, on the right-hand side there is nothing quantum left, just the inner product between our classical observables. Then the second question: once we have understood that there is such an L(t), will it generate a solution of the linear Boltzmann equation? As I said earlier, one could phrase this in a different setting using
Wigner or Husimi functions, but the statements can be related; it is, if you like, a slightly different topology of convergence. The really interesting observation of Eng and Erdős is that you get the linear Boltzmann equation where your collision kernel, which in the classical case was the differential cross-section of a single scatterer, is given by this quantum mechanical collision kernel. The delta function here makes sure that the collisions are elastic, so energy is preserved: the incoming kinetic energy equals the outgoing kinetic energy. And this is the single-scattering T-matrix. The T-matrix, if you like, is a well-known object in the theory of scattering at a potential, and it is related to the scattering S-matrix in a very simple way. That is exactly what you would expect here: it is what you would see if you had just one fixed potential and scattered at it, you would get an expression like this. So again, exactly as in the classical setting, even though you have all these scatterers, and it is not trivial at all, in the Boltzmann-Grad limit you should just see each scatterer by itself, the single-scattering cross-section determining the dynamics. And that is what I referred to earlier: in the limit you get classical propagation until you hit a scatterer, then you have quantum scattering, and then again classical propagation. It is a very beautiful mixture, and separation, of the classical and quantum regimes in this model. You do not have this in the weak-coupling limit. In the weak-coupling limit, what you see is that instead of the full T-matrix squared you get the first term in the Born approximation; so again you see this nice separation of classical propagation and scattering, but only to first order. And the gas is effectively more dense in the weak-coupling regime, the free paths are shorter, so the collisions are much more
frequent; so it is this picture that fails in that case. Nevertheless you still see the linear Boltzmann equation. Here, you see, it is in some sense nicer, because you have this classical intuition, you really see the free motion happening; and it is the miracle of quantum mechanics that in the weak-coupling limit you see things that from a classical point of view you would not really expect. Why should there be free motion for such a long time? But that is how it is. Okay, this is now the answer to the question for the periodic setting. So let us just take the lattice Z^d, on which we place our scatterers, exactly as in the picture I showed you. There is a very significant underlying assumption, which I will explain, which is what I call a generalized Berry-Tabor conjecture; I am going to explain what this means. Under this assumption we can show that indeed we have a family of limiting operators L(t), so that this convergence holds, just as in the random case. However, the limit is not a solution of the linear Boltzmann equation. That might not be such a surprise, because clearly the periodic setting is very different from the random setting, so I am not expecting you to be all shocked; however, I think it is important, or very remarkable, that we can use the same scaling limit as in the random situation, and the limit process is very, very non-trivial, as I will explain. Just as a side remark: we can prove this unconditionally, without the Berry-Tabor conjecture, up to second order in lambda; there is an expansion which we can do, it is all a bit messy, but we could not go beyond that, so we need the conjecture to identify the full limiting process, which I will now describe. So before I do this, let me tell you...
[Audience:] Can you identify the limiting object?
Yes, so I will talk about that, and in order to do it I will first tell you what the solution of the linear Boltzmann equation looks like, and then compare it with my limiting object, because unfortunately I do not have such a nice description in terms of a macroscopic kinetic equation. You can write down the solution of the linear Boltzmann equation as a collision series, where the kth term describes k minus 1 collisions. I do not know if we have any people online, but I hope you can see it if I write it really big. What happens here is that you can think of a random process describing the solution of the linear Boltzmann equation: a process where you fly for an exponentially distributed time, then change direction according to the collision kernel sigma, the differential cross-section, then fly again for an exponentially distributed time that is independent of whatever happened in the past, and so on. What I am writing down now is how the corresponding density evolves, separated out by the number of changes of direction. The zero-collision term is simply free propagation: you do not collide, you just move your particle x along a straight line, and you lose mass by an exponential factor that corresponds to the exponential clock in the random process, because the probability of having no collision decreases exponentially fast. That is the same in the classical and the quantum setting, because both are solutions of the linear Boltzmann equation. In the periodic classical setting, remember, I mentioned that we do not have this but rather an algebraic decay of this factor: it is not exponential but of the order of 1 over t squared, or t cubed, depending on which type of collision you look at. Okay, and then you can write down the expression for the (k-1)-collision term as an integral where you actually see the propagation, and you see your limiting random process given by these
limiting probability densities rho_LB here. They are simply a product of exponential factors, corresponding to the exponential times for each leg of the path: this is the time u1 here, if you like, time u2, time u3; they are all independent and exponentially distributed. The y's here are the vectors in which we travel, the momenta, and they are also random; as you can see, they enter independently, so this is essentially a Markov process, where consecutive momenta are just related by the differential cross-section. So that is the classical setting, which many of you know and have seen. Let me now contrast this with our limiting process in the periodic quantum setting. Again we write everything down as a collision series, and the first term, to our surprise, is given by the same expression as in the random setting: in contrast to the classical periodic setting, with its algebraic tail, we have here an exponential decay. That was a little bit of a surprise; I thought one would see some remnants of the classical setting, but of course we are not really in the full semiclassical limit, because we do quantum scattering, so there is no inconsistency or paradox here. Now the higher-order terms: the (k-1)-collision term again has a similar structure as for the linear Boltzmann equation. We still see the classical transport here in our observable; remember a is like a density, so the evolution is, if you like, backwards in time; the u_j are the flight times, the y_j are the momenta, just as I have plotted here. The question now is: what are the rho_LM? The rho_LM are our collision densities, and they have this expression; we can write them as a product of several terms. Note that we still have the exponential distribution on the flight times; however, the flight times also appear in this density here, so there is now strong dependence, as you will see when I explain what this function g_LM is. We still have the on-shell condition, so that is all good. And these
densities g_LM here: you see we are summing over all the g_LM, so it looks messier, but this is the right structure for writing it down. The g_LM are in fact matrix elements of this higher-dimensional contour integral. I am just flashing this at you, I am not expecting you to digest all of it; it has a sort of resolvent structure, and it is simply an object that may remind you of the Borel transform of this limiting matrix W, whose coefficients are given by, again, the scattering T-matrix. So the scattering T-matrix appears, which makes us very happy, because that is the natural object you expect, you want to see scattering; but it is no longer a single-scattering kernel that you see here. Everything is correlated: for k minus 1 collisions you have a matrix that you need to sum over, and every scattering event knows about the others. That is the real reason behind this complication: we are in a periodic setting, and I hope I will be able to explain this to you. Now there are some very beautiful simple formulas: if you look for instance at the one-collision term, we can relate our limiting densities and express them in terms of J-Bessel functions, which I have written down here. There are papers in the quantum chaos literature, in particular by Bogomolny and Giraud in Orsay, who found very similar formulas in diffractive systems; but let me not dwell too much on this.
[Audience:] Excuse me, does the G depend jointly on the flight times? You cannot split it?
No, everything depends on everything else, and you see this is the formula: the times u appear here, that is where they appear, and the momenta appear in this W matrix, and we integrate over this. The way I think about it is a little bit like this: this is like a propagator, and this is like a potential in matrix form, in k dimensions, so the (k-1)-collision term is sort of like a projection onto a k-dimensional space. In fact, if you read our paper, you will see we have a graph theoretical
interpretation of this for each k-term, where you have some dynamics going on, a quantum dynamics. And the absolute miracle that happens, and I do want to say this, is that we get a positive expression. It is not clear at all that these densities should be positive, because we are doing quantum mechanics: we are propagating forward and backward in time, and we get cancellations. The reason we get a positive term is the way we have packaged everything. Just to say it briefly: the way the derivation works, we see in our formulas that as you move, y3, y4, y5, there is a non-zero probability of seeing again a momentum that you have seen previously. The reason for this is the Bloch decomposition, and I will say a little more about it: because of the periodic setting, you are effectively seeing, with a certain probability, a finite-dimensional selection of momenta, so you have a positive probability of revisiting a momentum you have seen before. Now we put them all together as one in the density you have seen here: y5 I would count as y1, so I have already summed over those and packaged them together, and that is how we get positivity. If we just look at our raw expansion, it is not clear at all that this should be a positive density.
[Audience:] Usually when you do the Floquet-Bloch transformation you have the scale of the lattice and the scale inside the lattice cell, so there is a separation?
Exactly, and it disappears: the lattice cell disappears. That is the non-triviality of the limit we are taking, because we are zooming out, we are taking a continuum limit. And that is what a lot of the people who say it is easy forget: you are going into a scaling limit where all your bands suddenly come together and start mixing, because of this particular scaling. I will explain it, yes?
[Audience:] Are these coefficients invariant under rotations, or do you still see the orientation of the lattice?
No, we do not see the lattice. In fact, if we start with any other Euclidean lattice of covolume 1 (that is just a normalization), rotated or stretched, anything you want to do to it, any linear transformation, the answer will be the same. If we put another point inside the fundamental cell, then the answer will be different, you get different settings; but for any linear deformation of the lattice the limit process is independent of it. Okay, so here are the key steps of the proof. The first is the Floquet-Bloch decomposition, where we reduce the dynamics to each Bloch subspace. Of course, in the end we need to integrate again over the Bloch momenta, but for the moment we can think of the Bloch momentum as fixed, and we just look at each individual subspace for a given fixed Bloch momentum, as long as it is generic; so alpha has to be generic, random, etc. Then we use the Duhamel formula, as we have seen in several lectures here; there is no other way. The key point of our work in the classical setting, and also of François Golse and Emanuele Caglioti's work, was not to use the Duhamel expansion but to use the geometry of the trajectories; that was absolutely crucial there. Here there are no trajectories that I can see, and we just need to do Duhamel. I am not going to show you all the long formulas; we control the error terms, you just need to trust me there. Let me instead try to explain the key idea that is new here. Of course, if you apply the Duhamel formula, you see the unperturbed term appearing, and if you iterate you get these multiple integrals over the free propagator and the potential. So you need to understand the spectrum of the free propagator, and the really important point is that we are in a Bloch subspace where the free propagator behaves very, very randomly; that is the assumption that is key to our theorem. The point is that the eigenphases of the free propagator, in other words the eigenvalues of the Laplacian, are given by this
formula: they are essentially (m plus alpha) squared, so for example in two dimensions this would just be (m1 plus alpha1) squared plus (m2 plus alpha2) squared, values of a quadratic form, and we need to work with those. What I am going to tell you is that these values behave randomly; in fact, let me not give the punchline away too early. So this is now our set, the unperturbed spectrum: each eigenfunction of the Laplacian corresponds to a shifted lattice point in this set, and you put those vectors into the quadratic form. What one can show now is that the (n-1)-collision term, which comes from the kth-order term in the Duhamel expansion, can be written in this form: some function evaluated at the squared lengths of those lattice points, the first n plus 1 of them, together with the corresponding momenta, the labels of the eigenfunctions, which appear here with a factor of r. Remember, and maybe I did not highlight this enough, I am in the limit where h is equal to r, so r is now my wavelength, and that is why the r appears with the momentum. Remember, the definition of our scaling of the pseudo-differential operators had h times the momentum; that is exactly this, and here you see a combined scaling that comes from a combination of h-bar and the mean free path length. Think of this function as having compact support, even though it does not; it is a bit more singular than that. We are summing over all those lattice points; "non-consecutive", and you have seen this before in some of the lectures, means that consecutive elements should not be equal, so p_j should not equal p_{j+1}. And what is also important: I said you should think of the function as having compact support, but not quite in the first coordinates, because it is invariant under shifting each value there by the same amount, so it is translation invariant. In other
words, this function only depends on the differences between the p_j. Now, in fact, you can identify such a function with so-called n-point correlation functions of the values generated by these norms squared, truncated by the cutoff given by r: as r tends to 0 you take more and more values, but you also restrict the summation range more and more, so that is exactly an n-point correlation function of this process. And what I am saying is that our assumption is that, in our considerations, we can replace the shifted lattice, shifted by alpha, by a Poisson point process in R^d. And you will tell me that I am crazy, right? Who thinks I am crazy? Please, hands up. You think I am crazy; I knew that before. But who thinks I am not crazy? Come on, nobody... Manfred, you think I am not crazy, thank you. So this is a really important point, and I am going to show you why I am not crazy, because I do not want to leave you with a wrong impression. This is in fact something I worked on over 20 years ago, when I studied the Berry-Tabor conjecture, and also something that Lebowitz and Bleher were very interested in over 20 years ago, in connection with one of the basic questions in quantum chaos. The point is the following. Here are your lattice points, and the question you are really asking is this: you look at a large annulus, in our case centered at alpha, of radius 1 over r, and its width is scaled (there is a square here, so I should probably normalize by 2; let me just say its area is related to that quantity) so that the strip becomes smaller and smaller and, in two dimensions, is normalized to have constant area, so you expect only finitely many points in there. So if you now take the radius to infinity, i.e.
little r to 0, what you are conjecturing is that the probability for a random radius to find k points in that little strip is given by the Poisson distribution. That is a major open conjecture. What I could prove over 20 years ago is that the two-point function is in fact consistent with this conjecture, and that is what we can use to do some steps rigorously, up to second-order perturbation theory. There is an additional feature here that is very important: we need not just the radii to be Poisson distributed, but independence in the directions as well. We need that if we look at a small sector and just look at the points there, we also see the Poisson distribution. In other words, in this limit you have a sequence of radii that are Poisson distributed, and the arguments, the angles, should converge to independent random variables; in other words, the whole lattice should converge to a d-dimensional Poisson point process in this particular scaling limit. Of course, the lattice itself is never a Poisson point process, but it is if you look at it with respect to these particular, strangely scaled test sets. So, just to convince you, here is what I am showing you on the next slides. I am looking at the radii squared; by the way, in two dimensions they form a sequence of density one, so that is a good normalization. My claim is that these converge to a one-dimensional Poisson point process and that the directions become independent. And if the radii are given by a Poisson point process on the positive reals and the directions become independent, then that is the same as saying that, in the plane, with these as polar coordinates, you get a two-dimensional Poisson point process. So let me just show you this. Okay, here is the picture, and now you will say: yes, I am crazy, because this does not look random at all, there is structure in here. This is for a given radius
R, where R is one over little r in my picture, and you see a very clear structure. But remember, we want to take R to infinity, so I am now just taking points in a strip out there: a strip at radius R, with a window scaled in exactly the right way, and it does look more random as you increase R.

Another test you can do is to look at the consecutive spacings in my sequence of lattice-point lengths squared: order them, so they form an ordered sequence of density one, and look at the distribution of gaps. I am looking here at the joint distribution of gaps and angles: this would be a very large gap, this a very small gap, and here are the angles (do not ask me why this starts at one; that must be a typo). Here is the histogram. Remember that one of the properties of a Poisson point process is that the gaps between consecutive events are exponentially distributed, and that is what we see here. This is in fact the histogram of the joint distribution with the associated angle, since each gap has an angle associated with it, and you see that it is jointly exponential. So that is numerical evidence, if you like, for our conjecture.

As I mentioned earlier, almost 20 years ago I proved rigorously that if alpha is Diophantine (alpha, remember, is our Bloch vector), then the two-point correlation function converges to that of a Poisson point process, in arbitrary dimension. The Diophantine alpha for which I could prove this form a set of full measure, and in fact you can also prove the convergence not just for fixed generic alpha but for random alpha, in expectation. That enabled us, in our first paper with Jory Griffin, to prove rigorously, without this assumption, that what I showed you here converges up to second order in perturbation theory. And here are the open
problems. I have no idea how to prove this for higher-order correlation functions, but I am not giving up; it is on my to-do list. It could also be that in some other scaling limits, where we choose the wavelength much smaller than the radius or the other way around, some of these considerations can be made rigorous. Of course, when the wavelength is smaller than the radius we are in the semiclassical regime, where the scattering too will be semiclassical rather than quantum, and when the wavelength is larger than r we are in a classical homogenization setting, where people have developed other techniques to study problems of this form. My idea would be to keep those two scales close, because if you just fix the wavelength and take r to zero, you are in the classical setting where one understands these problems, which is of course more established territory; there you do not even need the weak-coupling limit, it is really easy, just classical solid-state physics. You could also look at the limit of delta scatterers, where the wavelength is scaled in a particular way with h-bar; and then, of course, the extension to quasicrystals and other scatterer configurations is a good problem.

The most pressing question for you, and I really invite you to think about it and tell me if you have any ideas, is that we do not even know the long-time, hydrodynamic limit of our limiting process. It is very explicit, but because it has this matrix form I do not know whether it will be diffusive, superdiffusive, or even ballistic. I have not really thought much about it except to stare at it; it is a non-trivial question, and I think it would be a beautiful result to determine how the long-time limit behaves. Okay, with that, thank you very much again, also to the audience for coming here and being live on screen, and if there is anyone online, thank you for zooming in. Questions?

[Question] Why do
you expect superdiffusion, and what do you know about the time decay?

Why do I expect superdiffusion? Did I say I expect superdiffusion? I put a question mark. The only reason to expect superdiffusion is that this is what we see in the classical setting; that is a theorem Bálint Tóth and I proved for the kinetic Boltzmann-Grad limit of the classical periodic Lorentz gas. I somehow do not expect a diffusive limit, just because there are such strong correlations in the quantum dynamics. So let me just say I put a question mark there; it is not a conjecture, it is a question. I would expect something superdiffusive, or maybe even ballistic; of course that would be a bit boring, and I would be disappointed if it is ballistic, because I would like to say that if you first take the Boltzmann-Grad limit you see something different from the standard solid-state philosophy that a periodic potential gives ballistic motion. So I would love to see some diffusive component here.

[Question] I understand that in the weak-coupling periodic case nothing is known?

No; in the periodic case you do not even need the weak-coupling limit, you can solve the problem completely. You have the band structure, you do classical solid-state theory, and if you like you can then take lambda to zero, but you do not need to, because you already have your solutions. So in the periodic case that is the easy part; the hard part is the actual low-density Boltzmann-Grad limit. That is my point, and I think people had not appreciated that previously.

[Question, Mario] In the classical weak-coupling case, what is known?

In the classical weak-coupling limit you get the linear Fokker-Planck-Landau equation. There is a very old result by Dürr, Goldstein and Lebowitz in two dimensions, but there was an earlier result by others, I think Kesten and Papanicolaou in three dimensions. But
Kesten-Papanicolaou do not treat the periodic case; it is not periodic. So what I asked was about the periodic case.

Right, and I do not know what happens in the periodic case; I do not think anything was done there before. For the weak coupling there is of course Kesten-Papanicolaou, but that is the random, not the periodic, setting. So it is a good question.

[Question] I have another question. You mentioned that in the limit you get the solution as a series; looking at the hydrodynamics, is it just an expansion?

Exactly, and I do not have an equation. The structure, and you can see it in our paper, is that for each term in the expansion we can define a graph on which, in a sense, you have a quantum dynamics running backwards and forwards in time, and only if you order the terms in the way I have done do you get a positive density. So I think we could write down a sort of effective equation for each term k, but not for the global object; that I have not found. And you would of course want it for the global object, something like a generalized Boltzmann equation. I was looking for it, but I do not have it.

Are there other questions? If not, I think we can thank our speakers again and have a break.
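The gap-statistics experiment described in the talk, ordering the normalized radii squared pi*|m - alpha|^2 of the shifted lattice Z^2 and checking that consecutive gaps look exponentially distributed, can be sketched numerically as follows. This is a minimal editorial illustration, not code from the lecture; the shift `alpha`, the box half-width `N`, and the cutoff `T` are arbitrary choices. The factor pi makes the sequence have asymptotic density one, so under the Poisson conjecture the gaps should be approximately Exp(1)-distributed.

```python
import math

# Illustrative Diophantine-type shift (Bloch vector); NOT the alpha from the lecture's plots.
alpha = (math.sqrt(2) % 1.0, math.sqrt(3) % 1.0)

N = 150                   # half-width of the search box in Z^2
T = math.pi * N * N       # keep values <= T, so the disk of radius N around alpha fits in the box

# Collect the normalized radii squared pi*|m - alpha|^2 for lattice points m in Z^2.
vals = []
for m1 in range(-N - 1, N + 2):
    for m2 in range(-N - 1, N + 2):
        v = math.pi * ((m1 - alpha[0]) ** 2 + (m2 - alpha[1]) ** 2)
        if v <= T:
            vals.append(v)
vals.sort()

# The normalization gives asymptotic density one, so consecutive gaps should average ~1,
# and under the Poisson conjecture their distribution should be close to exponential.
gaps = [b - a for a, b in zip(vals, vals[1:])]
mean_gap = sum(gaps) / len(gaps)
frac_big = sum(1 for g in gaps if g > 1.0) / len(gaps)

print(f"values: {len(vals)}  mean gap: {mean_gap:.3f}")
print(f"fraction of gaps > 1: {frac_big:.3f}  (Poisson prediction e^-1 = {math.exp(-1.0):.3f})")
```

Only the convergence of the two-point function is proven (for Diophantine alpha), so the printed fraction of gaps exceeding 1 coming out near e^-1 is numerical evidence of the kind shown in the talk's histogram, not a theorem.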