So, after two quantum talks, this will be about the classical Lorentz gas, so those of you who don't have the quantum intuition may also easily follow, I hope. In my introductory slides you will see a lot of overlap with Jens' introductory slides; we didn't coordinate our slides, but it's not a problem, I hope. So it's about the Lorentz gas, and as Jens presented the historical background, I will not speak much about the history. The two basic models here are: one is periodic, the other is random. What is the Lorentz gas? You have infinitely heavy scatterers, centered at some points in space, in R^d, and a light particle, a particle of mass 1, traveling between them according to Newton's law: flying freely with constant speed between two consecutive scatterings, and scattering elastically off the infinite-mass scatterers placed there. Some notation is fixed here: I will denote by r the radius of the scatterers, and what I'm interested in is the trajectory, of course. You have to introduce some randomness if you are a probabilist and you want to understand limiting distributions. In the periodic case, as the scatterers are placed completely deterministically in a periodic arrangement, you choose a random initial direction, say according to the uniform distribution over possible directions; in the random case you do the same, but you have extra randomness, as your scatterers are placed according to some random point process, which throughout my talk will be a Poisson point process, so uncorrelated points. What I'm interested in is the large-scale asymptotic description of the trajectory. This is a very difficult and by now classical problem. I think I told you everything about this. I will have a mixture of hand-written and printed slides, because I'm not able to produce nice pictures.
I had some hand-prepared pictures and I inserted them. OK, so the big question here is: I told you what the dynamics is, these are two different settings, so two different problems, and you introduce some randomness in the initial condition, say the direction of the initial velocity. The big question is whether the trajectory x(t) of the flying particle obeys a central limit theorem, or, if you are even more ambitious, an invariance principle under diffusive scaling. How do I do it? Yeah, like that. So what you see here is diffusive scaling, but it may be the case, as Jens emphasized in his last slide, that another type of scaling might apply, or even another type of limiting law. And there is a huge history here. Yes, it started with Lorentz's paper back in 1905, but the mathematically rigorous analysis of the problem in the periodic case is rooted, I think, in these works of Yakov Grigorievich Sinai and Lyonya Bunimovich, back in the late 70s and early 80s, and the best results about the periodic case are listed here. So, if you assume finite horizon: I didn't tell you what finite horizon means, I tell you now. What you see here is not finite horizon, because you have arbitrarily long, actually deterministically infinitely long, free trajectories. If you don't allow that, you can easily imagine a periodic configuration where you place more scatterers in some places, still periodically, in such a way that you exclude infinite horizon: you don't allow unboundedly long free flights. Now, Bunimovich and Sinai, as you all know well by now, proved, back in 1980, in a groundbreaking paper, the central limit theorem in two dimensions. They proved many other things, and as a consequence they proved the central limit theorem with standard diffusive scaling.
I think this is really one of the great results; I quote it here in the historical background. What they do is a reduction of the problem: they don't treat it as a probabilistic problem but as a dynamical one. The periodic case can be reduced to hyperbolic dynamics: factorize to one period and look at the dynamical system there, and they developed enormously strong technical tools for analyzing the dynamical-systems side of the problem. That was two dimensions; as you see, quite some time later the three-dimensional problem was settled in the finite-horizon case. There are some assumptions made there: the two-dimensional case is absolutely settled, in three dimensions some assumptions are made, but certainly the same is true, so the invariance principle holds. Now, when the horizon is infinite, which means the picture you saw before, and what Jens also mentioned in his talk, there was a conjecture formulated by Pavel Bleher, as you see, namely that you should overscale: you should scale slightly more than in the central limit theorem. Bleher's argument is essentially the same, but of course much more involved, as what we learn in probability theory when we prove the central limit theorem just on the borderline of the domain of attraction of the normal distribution. So you need something like that; this was not a proof. This conjecture was settled in two dimensions, only in two dimensions, by Domokos Szász and Tamás Varjú in a paper, and let me just advertise that in three and more dimensions it is absolutely wide open. I speak about no Boltzmann–Grad limit here: you just take the problem as it is, and that is what I wrote there. So it is wide open in three and more dimensions, as far as I know.
Now, the random scatterers. When you speak about random scatterers you expect that the problem, I mean this problem of the central limit theorem, must be somehow easier probabilistically, because we have more randomness in it; nevertheless, it turns out that it's not. Why? Because there are no methods. In principle, philosophically, it might be easier because there is more randomness, but there are no mathematical methods to treat it. That's what I will advertise in this lecture. The absolute holy grail here is the following. Take the problem as I stated it, with scatterers placed according to a Poisson point process; rho is the density, or the intensity, of the Poisson point process, I'm not sure I told you. Assume that r to the d times rho is less than a critical value. This is a percolation argument: if that product is too high, then with probability 1 each point will simply be surrounded by overlapping scatterers, and there is no way of diffusion. So let's assume we are in that regime; this critical value is a number which is not computable but proved to exist. If this product is less than the percolation critical value then, conditionally on not being fully surrounded, prove the central limit theorem. This is an enormous task. This is indeed a holy grail; I don't think anybody has an idea how to really do it. Now, there are two sources of randomness, as I told you already, namely the initial placement of the scatterers and the choice of the initial velocity.
The direction of the initial velocity I will choose, as I told you, uniformly distributed, and the scatterers according to a Poisson point process. Jens mentioned two settings for a central limit theorem; I put a third one in between. The first one is what we call annealed: look at this as a random process with the randomness coming from both sources, and prove a limit theorem with respect to that joint distribution. Then the quenched, let me say the fully quenched, which means, as Jens said, for almost all realizations of the scatterers, with the randomness coming only from the initial direction of the velocity. And in between there is a physically, I think, not negligibly important one, which indeed sits logically in between the two, what I call semi-quenched: namely, you prove convergence in probability with respect to the distribution of the scatterers. I want to emphasize that this is physically relevant. The fully quenched is of course stronger, but in all these contexts of random walks in random environment, diffusion in random environment and so on, when people prove central limit theorems via martingale approximations and this type of techniques, this is the type of result you get. Now, the Boltzmann–Grad limit, as you all know, but I have these preparatory slides. The Boltzmann–Grad limit is, in both cases, the limit in which the typical free flight length stays of order one, and it is an elementary computation to see what this requires. This is what I will consider later. There is more than one way to formulate it, because you can also play with scaling time and space, but my formulation is: let r go to zero and the density rho go to infinity in such a way that the product rho r^(d-1) goes to, say, one; that will be the limit I consider.
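The elementary computation behind this scaling can be spelled out in one line; here is a sketch (my notation, not from the slides: v_{d-1} denotes the volume of the (d-1)-dimensional unit ball, so v_{d-1} r^{d-1} is the cross-section a scatterer of radius r presents):

```latex
% Among Poisson(\rho) scatterers of radius r, a unit length of free flight
% sweeps a cylinder of volume v_{d-1} r^{d-1}, so the free flight length
% \tau is exponentially distributed:
\mathbb{P}(\tau > \ell) \;=\; e^{-\rho\, v_{d-1}\, r^{d-1}\, \ell},
\qquad
\mathbb{E}[\tau] \;=\; \frac{1}{\rho\, v_{d-1}\, r^{d-1}}\,.
% Boltzmann--Grad limit: r \to 0,\ \rho \to \infty with \rho\, r^{d-1} \to 1,
% so the typical free flight length stays of order one.
```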
And as was already mentioned, I think: the random case. Now I changed the order, I changed it because historically things come in this order. The random case was treated by Gallavotti, in this paper of Gallavotti, and later by Herbert Spohn in a more general setting, with more general point processes with sufficient mixing; they proved it in the annealed setting. It is very important how you formulate it, so let me be very precise here. Fix a capital T, a finite time horizon, no matter how large but finite; look at the physical process up to that time, and take this limit. What they prove is that under this limit, as a stochastic process, this process converges to a Markovian jump process, a Markovian flight process, as it was described in Jens's talk. In plain words, this limiting process is essentially a random walk: independent, identically distributed exponential flight times, and at these exponential times the velocity is changed according to a scattering kernel, which in the case of hard spheres (I speak about hard spheres; we will see later that this could be changed, but let's speak about hard spheres) is as it appears in one of Jens's slides. Mind that there is an explicit expression here, and three dimensions is somewhat special. Now, that was the random case, going back to '69, say '71, when it was actually published; and the periodic case, again, I'm sorry I overlap with Jens, but we didn't coordinate our slides, I'm sorry about this.
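The limiting Markovian flight process described above is easy to simulate; here is a minimal sketch in Python (the function name and parameters are mine), using the fact mentioned in the talk that three dimensions is special: the hard-sphere scattering kernel renews the velocity uniformly on the sphere.

```python
import numpy as np

def markov_flight(T, rng):
    """Simulate the limiting Markovian flight process up to time T:
    unit-speed flights of i.i.d. Exp(1) duration, with the velocity
    renewed uniformly on the unit sphere at each scattering time
    (the uniform renewal is the d = 3 hard-sphere kernel)."""
    x = np.zeros(3)
    t = 0.0
    while True:
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)        # uniform direction on S^2
        tau = rng.exponential(1.0)    # exponential flight time
        if t + tau >= T:
            return x + (T - t) * v    # finish the last partial flight
        x += tau * v
        t += tau

rng = np.random.default_rng(0)
Y = markov_flight(100.0, rng)
print(Y)  # position at time 100; |Y| is diffusive, of order sqrt(100)
```

This is the process that appears on the right-hand side of the Gallavotti–Spohn limit; everything about it is explicit, which is exactly why it is the convenient reference object for the coupling later in the talk.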
OK, this is a long history and I apologize, I will not have time, and I didn't have space here, to list all the names: starting with Caglioti and Golse's result in two dimensions and ending with Marklof and Strömbergsson's result in arbitrary dimension. They prove, formally, a similar statement: in the Boltzmann–Grad limit there is a limit of the process up to any finite time, but the process on the right-hand side is no longer simple to describe. I would call it a hidden Markov flight process, because in the background there is a Markov process, which I will not describe now; you have to take into account some hidden Markov variables, and in terms of those it can be described. What is very important is that the flight times have heavy tails, explicit heavy tails depending on the dimension, but heavy tails. So this is the Boltzmann–Grad limit, and once you have it you can think about the two-step limit, because in the end we are interested in diffusion. The two-step limit in the random case is essentially straightforward, but it's straightforward because Donsker did the work for us; Monroe Donsker, or Erdős and Kac and Monroe Donsker, did the work for us. Namely, once you have this random walk process in the limit, take a second limit and apply Donsker's theorem: you get diffusion, and that's what we wanted. Of course we want a bit more. In the periodic case this is slightly more subtle, because in the Boltzmann–Grad limit the limiting process you get is not a random walk in the sense of sums of independent, or essentially independent, random variables.
You have to work a little bit more for it, and we have a joint paper with Jens in which we prove the invariance principle in any dimension (mind that, as remarked before, without the Boltzmann–Grad limit this is settled only in two dimensions), and with superdiffusive scaling; the superdiffusivity comes from the long flights. So much about the historical background, and now let's see what we can do better. I will be interested in the random case, the Poisson case, because in the random case we do not have any result at the diffusive scaling. What I mean here is: could we find some result which interpolates between what I call the holy grail and what is well understood, the two-step limit? In the periodic case, and this is the last time I mention the periodic case in my talk, the same question in dimension 3 or higher would be interesting, because in two dimensions we do have the fixed-scatterers result, the Bleher conjecture proved in the two-dimensional case by Szász and Varjú; nevertheless this intermediate scale is interesting there too, but in three dimensions it would be absolutely interesting. Question: when you mention the two-step limit, does it mean that you have to do the two limits separately, or can you just put them together? Answer: no, two steps is two steps. First prove the Boltzmann–Grad limit, which is done, and after the Boltzmann–Grad limit do the scaling limit, via Donsker's invariance principle. You have to work on it a little bit, because the first limit you have only on fixed, finite time horizons. But that's a formal thing: you can define convergence on the infinite time scale as convergence on every finite time interval (a weaker convergence, but you have it for any time), then take a long time scale t, divide by square root of t, and let t go to infinity, and that's the invariance principle.
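Schematically, the two-step limit in the random case can be written as follows (my notation, not from the slides: X^{(r)} for the Lorentz trajectory, Y for the Markovian flight process, W for Brownian motion):

```latex
% Step 1 (Gallavotti, Spohn): Boltzmann--Grad limit on any fixed horizon T_0,
\big(X^{(r)}_t\big)_{t\in[0,T_0]} \;\Longrightarrow\; \big(Y_t\big)_{t\in[0,T_0]},
\qquad r\to 0,\quad \rho\, r^{d-1}\to 1 .
% Step 2 (Donsker): diffusive scaling of the limiting flight process,
\Big(T^{-1/2}\, Y_{Tt}\Big)_{t\ge 0} \;\Longrightarrow\; \big(\sigma W_t\big)_{t\ge 0},
\qquad T\to\infty .
```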
So it's done, we understand it, it's Donsker's. And here is the interpolation; this is the first version of the theorem, the result I concentrate my talk on. This is a paper (the dates you see are always dates of publication) with Chris Lutsko; when we worked jointly on this he was a PhD student in Bristol, actually a PhD student of Jens, not mine. We proved exactly this type of result in three dimensions. Take the random Lorentz gas, and let's be very precise: the random Lorentz gas with Poisson distribution of spherical scatterers. Take the Boltzmann–Grad limit as I said, and along the limit (this is what you asked, I think) take a time horizon diverging to infinity, and try to prove the scaling limit on this time horizon. Now this is more difficult than the two-step limit, because the longer the time horizon, the longer the memory and the more you have to care about. The result is that if your time horizon diverges to infinity, but not too fast, as you see there, like 1 over (r log r) squared, then the invariance principle holds. Now let's stop for a moment before I go on. I expect, if you wish I may say I conjecture, that this type of result must hold up to any time horizon: take any T sub r which increases to infinity, say like exponential of 1 over r, or even faster; I'm pretty sure that this type of result should hold. I'm able to prove it only up to this time, and I have good reason to be able to do it only up to that time, as you will see later. Another remark, I should have written it down, but that's too much to say: if the result I call the holy grail were true, then in the limit you would see an invariance principle with some variance sigma squared, and the variance would depend on r. Now let r go to 0, and that variance
should converge to the same variance, so there should be some sort of continuity between the various limits. But this is indeed in the domain of dreams, because going beyond this time horizon, I don't know how to do it; that's a good challenge, probably for someone clever enough to invent something truly new. OK, here are some remarks. Up to a time horizon of order 1 over r, which is already an infinite time horizon, the problem is purely probabilistic: there is no dynamics in it, and that's what I will show you; in this sense it's simpler. First of all it reproduces Gallavotti and Spohn, and more than that, because it goes beyond, to a longer time horizon. You don't see from this formulation why it reproduces Gallavotti and Spohn, but you will see it from the next formulation, because it's based on a coupling. So this part is purely probabilistic; I don't say it's trivially probabilistic, but it's probabilistic. Beyond 1 over r, between these two time scales, geometry, or dynamics, call it as you wish, will also play its role. And now, as the theorem is formulated in the paper, here is exactly the setting I told you: 3 dimensions, spherical hard cores. It can be extended to any dimension; I know how to extend it, it would be an enormous amount of work, and it's not the case that I wrote down that it can be extended without seeing how. Up to a time horizon of 1 over r squared divided by a power of the logarithm: this 2 here is, yes, and this power here is 3 minus 1. In two dimensions this would be just that: in two dimensions the method doesn't give more than the purely probabilistic statement, and you will see why. This is important, because, as I will tell you with the method, this shows the limitation of the method: physically the result should be true beyond, but the method doesn't give better. And it can also be extended to other short-range interactions, under some conditions, so I use the fact that we are in three
dimensions and that we have hard cores, but I will show you how to extend it. When I say it can be extended, I don't mean that I have a paper written down; I will show you exactly what to do. Question: by short range do you mean finite range? Answer: yes, finite range, that's important in the method. Just to tell you, there is an alpha, and the alpha is always the same; probably I would be able to compute it, I can't answer you now, but we will see how it comes out. Question: about the cross section, you mentioned that in dimension 3 it is uniform; is it never an advantage to have a different exit distribution, does it not become better by having this sort of biased random walk? Answer: no. What matters is that the velocity is renewed without memory; this biased random walk doesn't help. The exit distribution not being uniform on the sphere doesn't help you to have less memory in any dimension; no, I don't think it helps, I don't think it helps. It makes things technically more difficult, I will tell you exactly why on a later slide, but it's not an essential difficulty; it would take another 10 pages to extend the argument, and I will be very explicit about what it would be. Good. The idea of the proof is coupling. I come partly from a probability background, so that's what I like: coupling. What does coupling mean? You want to prove something about a stochastic process, and you understand another one much better; try to realize them on the same probability space in such a way that you can learn something about the harder one from the easier one. That's the point. OK, and the coupling: you don't see yet what the coupling is, you will see it on the next slide, but let me state the claim, what kind of coupling I can realize. Take the Markovian flight process; the Markovian flight process is essentially a random walk, I understand it perfectly well, I know everything about it; and denote by u its velocity process, which is just a jump process.
OK, you understand what it is: the velocities in exponential time intervals. The other one is the Lorentz process; mind that x is the physical, mechanical process, which for some reason I call the Lorentz exploration process, and you will see why. Denote the velocity of the physical process by v(t). As for the randomness, I think about the physical process as an annealed process, so the randomness comes from everywhere, and I realize it in a sense which the probabilists, but others also, must understand: it is constructed from y. And how is y constructed? Give me a bag of independent exponentially distributed random variables and a bag of independent random variables for the directions, and from these I construct the process y. Once that is given, I can sequentially construct the process x(t) in such a way that it is measurable with respect to the sigma-algebra of the past of y: I don't have to look forward in time; just looking backward in time at the y process, I know exactly what happened with the x process up to that time. The idea of the construction is that u and v, the two velocities, try to stay parallel all the time. x and y cannot stay parallel: once they depart, they are away from each other; but you try to keep the two velocities parallel. This will not work for all time; sometimes they have to do different things, because they are two different processes. But what I claim is that these mismatches occur with frequency of order r, where r is the radius, the small parameter, and after a mismatch they are recoupled; by recoupled I mean that the velocities are put back to the same value after times of order 1. These are the two claims. If I can do these two things, and I will tell you how, you don't see it yet, then here is a hand-waving argument; of course you don't see the logarithms, for example. If this is the case, it means that up to times of order 1 over r, with high probability, nothing different happens,
so the two stay together, and that is the proof up to time 1 over r, of course once you have the construction. From time 1 over r on you will start seeing the differences: they depart, they occasionally do different things, but if I'm right in what I'm saying, then after a mismatch they come back together in order-1 time. Then you try to evaluate this maximum of the difference, up to capital T, divided by square root of capital T. This is elementary: just put the absolute values inside the integral and you get this expression as an upper bound. The integral is of order T times r, because you integrate from zero to T, you have altogether of order r times T mismatches, if I'm right, and each mismatch lasts a time of order 1; that's why you have this very simple upper bound. Divide by square root of T: r times T over square root of T is r times square root of T, which goes to zero exactly when T is small compared to 1 over r squared. You don't see the logarithms, you don't see corrections, but that's the idea. You also see that I do the most stupid thing here, because when I put the absolute value inside the integral I switch from fluctuations to a very rude, law-of-large-numbers type computation; if I were more clever than this, I could keep the absolute values outside, and I could probably have a square root of T instead of T for the integral. But this is just back-of-the-envelope. And here it is in plain words; I'm not writing formulas, because I think you understand better from plain words. OK, what am I doing? Let me go back to... no, I don't go back to the picture. When I explained the process to you, I put down the scatterers from the start: all scatterers were there, and then I started the process. Now, in the annealed picture you don't need all the scatterers in place from the start: as you go along, you explore, and you just put down a scatterer when you see a new one,
and you keep it there forever. So the construction of x(t) and the construction of the environment are done together, and the explored areas are recorded and never changed, because I want to keep the memory. The explored area is not only the scatterers themselves but also the corridors, the tubes of radius r around the past trajectory; these are recorded. Whenever you come back to these areas you must behave consistently; otherwise, when you are outside and explore virgin territory, the process behaves like the Markovian flight process. So what I have to control is what happens at the overlapping times, and that's written here. But of course things can go wrong, so it's not that simple. What can go wrong? I'm not sure you can see this slide well. On the left-hand side, the yellow trajectory is the Markovian, the random trajectory, and I construct the blue one, which is the physical, the mechanical trajectory. So the yellow one... sorry, I did something wrong here, it takes some time till it comes up. Good. This is the construction of the yellow one: fly for an exponentially long time, then renew your velocity independently (that's in three dimensions), fly for an exponentially long time again, and so on. The blue one tries to mimic that flight. I just started my job, so I don't have any information about the environment; I sweep the environment, and due to the fact that my environment is Poisson, the flight time is exponential (this is exactly the computation you do for the Boltzmann–Grad limit). When you change velocity, you place your scatterer, and it is there forever; fly on, change velocity, place your scatterer, fly on, place your scatterer, fly on. At this moment I know that I have these three scatterers, I don't know the fourth one yet; fly on. And this guy doesn't know it yet; the Markovian guy just doesn't remember
what happened here, so it just flies. But the mechanical guy knows that there is a scatterer, so it scatters away, as it should, or she should, and flies until the next exponential clock rings, and then follows the instruction; the instruction is: get parallel. Of course it may happen that you can't do it, because when the new instruction comes you are still in a wrong area; these are the technical parts of the proof, showing that this happens with very small probability. This is one thing that can go wrong. The other thing that can go wrong: here is another trajectory; fly; here the guy gets an instruction from the exponential clock to change velocity, but the non-Markovian guy knows that she cannot change her velocity here, because she is in the tube; so she flies on until the next instruction comes, and, assuming there is no other memory element there, follows it and gets parallel. Of course, when I say there is no other memory element, these are the difficult, or more or less difficult, proofs. That's the coupling; I explained to you what the coupling is, and this can be done by formulas, it's not long to write down. OK, and here is the theorem. The previous theorem is a consequence of this one; this is the precise formulation. The main theorem in that paper is: in three dimensions (we do it in three dimensions because we use that fact), take the Boltzmann–Grad limit; up to times of small order 1 over r the two processes are identical; beyond that the two processes are no longer identical, but exactly what I told you can be done, namely, in probability, the scaled maximum of the difference exceeds delta only with vanishing probability. So if you do the proper scaling, you get exactly what you need for the invariance principle. Why? Because I have the invariance principle for the probabilistic object. So this is the theorem.
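In symbols, the coupling theorem and its consequence can be summarized as follows (a schematic rendering of what was said, not a verbatim quote of the paper; X is the Lorentz process, Y the Markovian flight process, both realized on one probability space):

```latex
% Coupling theorem (d = 3, hard spheres): for every \delta > 0,
\mathbb{P}\Big(\sup_{0\le t\le T_r}\big|X_t - Y_t\big| > \delta\,\sqrt{T_r}\Big)
\;\longrightarrow\; 0
\qquad\text{as } r\to 0,\quad T_r = o\big(r^{-2}|\log r|^{-2}\big),
% and, since Y satisfies the invariance principle, so does X:
\Big(T_r^{-1/2}\, X_{T_r t}\Big)_{t\in[0,1]} \;\Longrightarrow\; \big(\sigma W_t\big)_{t\in[0,1]} .
```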
Good. And here is a very simple little lemma which explains a lot; this is a probabilistic lemma which explains the powers we have there and all that. What you see here, and I'm sorry, I hope you can see it, not quite: r to the k; then r to the d minus 1 times a power of log r; and r to the d minus 1; that's not very visible, anyway. So the yellow guy is the probabilistic object, a random walk, a Markovian random flight process, and the question is the following: start from that point, make at least k scatterings, maybe more but not less (by scattering I mean: change velocity at least k times), and the event whose probability I ask about is that after k or more scatterings the guy comes back to the 3r-neighborhood of the starting point. This is a probabilistic problem; you can give it, with maybe some effort, to a good probability student or PhD student to compute. It's not totally trivial, because there are some Green function computations in it; it's analysis. The statement is the following, and we are in 3 and more dimensions, that's very important (in 2 dimensions I have to do something different): for k not bigger than dimension minus 2, this probability is bounded above by a constant times r to the k; in 3 dimensions only one single k is included here, k equals 1. At k equals d minus 1 you will have, as it comes here, r to the d minus 1, but already with a logarithmic correction, with the power of the logarithm that you see; these are precise powers. And if I ask about 100 collisions in 3 dimensions, you don't gain more: the order of magnitude of coming back to the r-neighborhood after 100 collisions will have a smaller constant, but the same power of r. And this is important; let me tell you why. This tells you how to prove the theorem, and also tells you what the limitations of the
theorem are. My proof: maybe I give you the idea of the proof and come back to this later; I don't give you formulas, there are computational parts. What happens... I jumped, yes. OK, and this is the next one; sorry, before going to the next one: the first half of the theorem follows from this lemma, because that is only probability. From this lemma it follows, at least half of it, that the first type of bad thing happens up to times of order 1 over r only with negligible probability, because simply the y process doesn't come back, and the x process is, by construction, equal to the y process. I didn't say anything about the second type of bad event, the shadowing in the tube, but that is just a time reversal: if you think about it a little bit, reverse time, and the second type of bad events are the time-reversed first type of bad events. And this argument proves the statement up to times of order 1 over r. Now, this lemma also tells you the good strategy for going beyond, to longer time scales. On longer time scales you will see recollisions, bad events, mismatches, and you have to control them. But if you think a little bit about that lemma, you will easily convince yourself that the mismatches, or bad events, which may occur up to a time scale of 1 over r squared, possibly with logarithmic corrections, will be only what I call direct recollisions and direct shadowings; by direct recollision I mean recollision with the last scatterer. It will not happen (I will not write it down formally; of course it needs a proof, because this is just a guess based on the probabilistic arguments) that, up to times of order 1 over r squared, say 1 over r squared divided by log r squared, that is 1 over r squared times 1 over log r squared, that's the correct expression, the physical object will
recollide with a scatterer that was seen further back in the past, not just in the immediate past. This needs a proof, of course, and you will see how I prove it, but this is what you expect. You expect only direct recollisions, what you see here: you bump between the two scatterers some number of times, leave, and that's it; or direct shadowings, which means that there is a sharp collision here, and at this time point the particle gets the instruction that it should do something, but it is not allowed to. By direct I mean that this is not in the tunnel of some older part of the past; and moreover, what I call xi-2, the length of the flight after the first collision, in this case the distance between the two scatterers, this flight will be small, of order one. So it's easy to see, or easy to convince yourself (not to prove, to convince yourself) from the probabilistic object that this is what you should fight against. How to fight against this? I define an intermediate object which I call a myopic object. The myopic object, first of all, doesn't take older collisions into account: you start to make the construction as I did before, but erase from the picture those scatterers, and those tunnels, which occurred more than one scattering ago; that's one thing. And also: if after a scattering a flight longer than one occurred, then you can erase that scatterer too, forget about it. You can easily define a process like this. It will not be Markovian, and it will not be built from independent pieces immediately, but if you step back and look at it, you see that you can build it up from independent steps: namely, break up the sequence of flight times (and the successive new velocities in there, which I didn't write) into blocks which are separated by 2-plus-2 long flights. By long I mean longer than one, so
the separating parts mean that you had two long flights just before and two long flights ahead, and you easily see that these blocks will be fully independent. These are of random lengths, but you can write down the independent distributions of these guys such that if you put them together you get back just the original sequence. Each block is a leg within which direct recollisions may occur; these safety zones ensure that direct recollisions occur only inside a leg, and the whole thing is broken up into independent legs like this. And now the problem breaks down into two steps. By the way, the length of such a leg is exponentially tight, that's just elementary probability; so these are of random lengths, but with exponentially decaying probability, and they are independent legs. Construct the process Z which I told you about within these legs: the Z process will be a sort of random walk, a very complicated random walk, but independent steps put together, right? And now come the two elements of the proof. First: within one leg (take just this leg; it is a finite sequence of exponentials and independent new velocities) construct the X process, the physical process, and construct this myopic process: these two will be identical with overwhelming probability. This is difficult; the geometric, or dynamical, part of the proof is hidden in this step. Yes, yes, yes, and I knew that there was an error, I just didn't find it; I knew when I came from Budapest that I had to correct something, and when I looked at it yesterday I didn't find what. You are right, that's a Z, thank you. So X and Z are the same with the probability you see there, and you see the log squared there; I cannot do better. Sergio asked me whether the log squared is indeed necessary, whether log isn't enough. I don't know; I could do it
with log squared; it may easily be the case that log is enough, but I was not able to do better. And the second element is a probabilistic part again: because the big Z process is built up from independent steps like this, you do Green function estimates for it. It's not an ordinary random walk; well, it is ordinary in the sense that it's built up of independent steps, but you have to use a lot of things. So: if you construct the Z process out of these independent legs, there is no interference between different legs (that's a probabilistic statement), and within one leg X and Z don't differ (that's a probabilistic and dynamical statement). It's difficult; I don't want to say it's hard, but it's not straightforward. No, I won't go into the geometry now, but I want to tell you the limitation. I told you that in two dimensions you can't do better than this, and in higher dimensions (though I don't think many care about the Lorentz gas in four dimensions) you couldn't do better than d-1. As I told you, it comes from this lemma: up to the reciprocal of that time scale, maybe with some more logarithmic corrections, there are only finitely many recollision patterns you have to control. But beyond time scales of that reciprocal, all complex recollision patterns come in with the same order of probability, different constants in front but the same order, and the type of argument I told you about doesn't work; you have to do something else. So, since I didn't tell you anything about the geometric argument: I still have some five minutes or so, but the geometric argument is longish and I'm not sure that in five minutes I can give you sufficient
detail to take away something, so I'm not going to do this now. But OK, what are the further extensions? I'm sure in other cultures, other languages, there are sayings like what we say in Hungarian: once you have a hammer, you think everything is a nail. So once we have this method, we can do (or could do) many other things, and some are done. The first outlook is about other types of interaction. Three dimensions was very special, because the scattering was just a renewal of velocity, so I could really, from the start, break up everything into independent pieces; that's how I started. If this doesn't hold, but you have Doeblin's condition (I don't know whether this community is well aware of what Doeblin's condition is; it's Wolfgang Doeblin's ingenious, simple observation that if you have a suitable lower bound on the kernel of a Markov chain, then you can do everything that you can do in the independent case, namely break up the Markov chain into independent legs), so if Doeblin's condition holds for the scattering kernel, then rather than having independent steps, we have from the start independent legs of random but exponentially tight lengths, and with much more work exactly the same thing can be done. The easiest example to name is the Ehrenfest wind-tree model, which is done in a different paper. This is simpler than the spherical case from the geometric point of view, because the geometry is much simpler; on the other hand there is the extra difficulty of not having independence, of having to apply Doeblin's trick. Of course this also works in dimensions greater than 3, but not in dimension 2, because in dimension 2 you don't have Doeblin. And if you don't have Doeblin, but you do have Doeblin for the second convolution power, then Doeblin's trick is more complex, because you can't break up your Markov
chain into independent legs, but (you recognize the pattern) into one-dependent blocks; so more probability is needed, but it's doable. And here comes something that I wanted to mention: the magnetic case. I am inspired by this recent paper of Alessia, Chiara and Sergio, in which they make a conjecture, a construction, rigorous. They prove the following. Take the two-dimensional magnetic Lorentz gas; this picture was done by Chris Lutsko, that's why it's so beautiful. So take the two-dimensional Lorentz scatterers (and that thing which looks like an arrow, like the rest: it's not an arrow) and let a charged particle move in a transversal magnetic field. We all know that it will move on circles of fixed radius and scatter, and here is a trajectory you can see. Of course it's all deterministic; you may have closed trajectories, you may have much more complicated closed trajectories, and you may have non-closed, open, infinitely extended trajectories like this. Under the assumption that the density is not too high, you will have, with positive probability, infinitely extended trajectories. And what Sergio and his co-authors (let me name them all: Alessia, Chiara and Sergio) prove, making rigorous a conjecture of Bobylev, is that in the Boltzmann-Grad limit this process, just as Gallavotti did it for the Lorentz gas, converges to an essentially Markovian flight process; it's a complex Markovian flight process, which I will show you on the next slide. Namely, what you see on this side: the process (OK, it's not Markov as such, but you can find the Markovianity in it) flies on circles of radius one, but after a full circle makes a random turn. To see where this comes from, look at this one: here you have the scatterer, the
particle comes in from somewhere, comes in from here, takes a scattering, then goes around, has the same scattering angle, and after an exponential time finds a new scatterer. Now take the scatterer radius down to zero and you see this picture; that's the process, am I right? And they prove this convergence; the proof is like Gallavotti's, or Spohn's: it's computing transition probabilities. What I claim, what we do with Chris Lutsko (it's not written up yet), is the same result, but going to longer time scales. We are in two dimensions, so we can go essentially to 1/r, with logarithmic correction, and we can do exactly the coupling: we can couple the physical process with this Markov process in such a way that they stay close up to times of order 1/r. It's a bit more work, it's a two-step argument, but essentially that's done. In this case only the probabilistic argument comes in; in two dimensions we can't go beyond the probabilistic argument. So this is what I wanted to say, and I think I stop here. Thank you very much for your attention. Actually, I do have one more slide to show you: as I'm the last speaker, I think everybody will join me in thanking our organizers. Are there questions or comments? Yes: in the work of Eberle and co-authors there is a coupling for a problem that seems a bit similar to yours. They want to couple two processes which should stay very close but can never be equal, and for this they develop a notion of coupling in which the processes stay very close without coinciding. Have you tried an approach like that? No, I don't know the work you mention; I would be happy to learn about it. Who were the authors? Andreas Eberle, Arnaud Guillin; it's a sticky coupling. Sticky coupling... no, I don't know this work, so I can't answer your question. I don't think
that I know exactly what such a coupling would give me here, but it may be interesting. Yes, Jens? So it's absolutely critical that you have a Poisson process? That's critical, because I translate Poisson in space into exponential in time. But of course we'd also like to understand other point processes; do you have another coupling that would do this for you? I can't answer; I didn't think much about it. Probably it's very difficult, but maybe some clever young person can find it. Probably you need a second coupling: you need somehow to map first into Poisson, or into something which on large scales is Poisson. I don't actually know, because the point processes Herbert considers in his paper are on large scales Poisson; I mean, if you thin them, then in the limit you see a Poisson. But even before the limit; I mean, you take a transition rate which depends on x and t. That's right, but I think that you can reproduce Herbert's result via the Gallavotti approach, just changing: instead of having t, you have the integral of the transition rate. That means that in the time process (in my construction the time process was Markovian) you have to change the rate; it will take longer, I agree. Possible, who knows. So I think this is a non-computational proof; of course there are lots of computational elements, but my feeling is that it's not the computations from which you learn, but the phenomena. Let me say that people have tried to go away from the hierarchy approach; there was an improvement some years ago, moving from the Gallavotti picture, which is very explicit, to something which is hierarchical, so in some sense it's strange. Anyway: on your first slide you had this cartoon of the proof where you were showing this integral, and you were taking the absolute value inside. Is it possible not to take the absolute value? Because the mismatches should be mostly independent. That's what I... So if you were able to do, first of all, the coupling thing; the coupling goes only up to, how should I
put it, in three dimensions, if I go beyond this time scale, then as I said all recollision patterns come in, but they are somewhat separated, so there is a chance. If I just make this back-of-the-envelope computation, sweeping all the difficulties under the carpet, then exactly the argument you describe would give me the optimal result; but of course there are the hurdles of how far you can get with the coupling itself. So yes, you are perfectly right: this is just a law-of-large-numbers estimate, and compared to estimating fluctuations you lose a lot. Why did you say in the beginning that you can get to arbitrary powers? Because, if I understood the computation correctly, you just gain a square root of something. I don't claim that I can do this; I just said that this type of argument shows that if you could... let me just go back a little, I'll try to find it. Ah, here it was. Look: what do I do? I estimate this thing by t times r, because the trajectory is t long and I have altogether of order t times r mismatches, each one lasting order-one time. Now what you say is: rather than that, estimate by the fluctuations, which would give you the square root of t times r. Then you have the square root of t times r divided by the square root of t, so you get a square root of r there, which goes to zero, and you would have the good closeness up to any time, no matter how long the time scale is. But this is hand-waving. In principle it shows that if you were able to control the coupling and to go beyond law-of-large-numbers estimates to fluctuation estimates, you would get essentially any power, almost to the Holy Grail, just before the Holy Grail. OK, other questions?
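The back-of-the-envelope comparison just discussed can be written out in one display; this is only the heuristic from the exchange above (with X the physical process and Z the coupled Markovian flight, as in the talk), not a proved bound:

```latex
% Law-of-large-numbers estimate: of order t*r mismatches up to time t,
% each lasting a time of order one, so after diffusive scaling
\[
  \frac{|X_t - Z_t|}{\sqrt{t}} \;\lesssim\; \frac{t\,r}{\sqrt{t}}
  \;=\; \sqrt{t}\,r ,
  \qquad \text{small only for } t \ll r^{-2} .
\]
% Fluctuation (CLT-type) estimate, if the mismatches were independent
% and centered:
\[
  \frac{|X_t - Z_t|}{\sqrt{t}} \;\lesssim\; \frac{\sqrt{t\,r}}{\sqrt{t}}
  \;=\; \sqrt{r} \;\xrightarrow[r\to 0]{}\; 0
  \qquad \text{uniformly in } t .
\]
```

The second line is exactly the "square root of r which goes to zero" in the answer: a fluctuation bound would beat the law-of-large-numbers bound by a factor square root of (t times r), which is why it would reach arbitrarily long time scales.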
Let me say... yeah, maybe. But shouldn't you change the diffusion coefficient if you push beyond that? In this regime it is still the same; and beyond this, OK, this is guesswork, take it as a conjecture if you wish: it is the same. Suppose you were able to prove (let me just say it) that X(t) divided by the square root of t, with fixed r, conditioned on non-trapping (yes, this is the Holy Grail), converges to a normal with some variance sigma(r) which depends on r; this is at fixed r. What I think is that if this were true, which probably we will not see in our lifetime, then as r goes to zero, sigma(r) converges exactly to the sigma I get. That's what I think; and in all of the range in which I could prove it, this sigma has some corrections in r, but those are small questions. OK, if there is no other question, I can thank all the speakers of this morning session and of course the organizers.
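As a side note to the probabilistic lemma discussed earlier (the return probability of the Markovian random flight), here is a minimal Monte Carlo sketch one could play with. Everything in it is an assumption made for illustration, not the computation from the talk: unit speed, rate-1 exponential flight times, a fresh uniform direction at each scattering, a 3r capture radius, and a cut-off at 30 scatterings.

```python
import math
import random

def random_direction():
    # Uniform direction on the unit sphere in R^3, via normalized Gaussians.
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-12:
            return [c / n for c in v]

def segment_min_dist(p, d, length):
    # Minimal distance from the origin to the segment {p + s*d : 0 <= s <= length},
    # where d is a unit vector.
    s = -sum(pc * dc for pc, dc in zip(p, d))
    s = max(0.0, min(length, s))
    closest = [pc + s * dc for pc, dc in zip(p, d)]
    return math.sqrt(sum(c * c for c in closest))

def returns_after_k(r, k, max_scatter=30):
    """One run of the Markovian random flight: unit speed, Exp(1) flight
    times, fresh uniform direction after each scattering.  True if some
    flight segment starting after at least k scatterings passes within
    3r of the origin (a cut-off at max_scatter scatterings is imposed)."""
    pos = [0.0, 0.0, 0.0]
    for scatterings in range(max_scatter + 1):
        d = random_direction()
        length = random.expovariate(1.0)
        if scatterings >= k and segment_min_dist(pos, d, length) < 3.0 * r:
            return True
        pos = [pc + length * dc for pc, dc in zip(pos, d)]
    return False

def estimate(r, k, trials=50_000):
    # Monte Carlo estimate of the return probability from the lemma.
    hits = sum(returns_after_k(r, k) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    random.seed(1)
    # In d = 3 the lemma's bound for k = 1 is of order r, so halving r
    # should roughly halve the estimated return probability.
    for r in (0.2, 0.1, 0.05):
        print(r, estimate(r, 1))
```

Checking the distance of each whole flight segment to the origin (rather than only the scattering points) matters: it is the tube around the trajectory, not the endpoints, that produces the order-r scaling in three dimensions.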