Okay, so the goal of this lecture is to talk about large-scale limits of, as the title said, interacting particle systems. I will talk more specifically about the case of dilute gases, and of course I will explain what it means to be a dilute gas. And I will be even more precise: I will look at a situation close to equilibrium. So it is not equilibrium statistical physics, because we will have some dynamics, but still it will be close to equilibrium. This is part of a very big program that we have been working on with Thierry Bodineau, Isabelle Gallagher and Sergio Simonella for a couple of years now. There is a lot of material, and I would like to tell you many things, but I will try to stay organized and just focus on some of the main ideas. Of course, if you have questions, or if something is too fast or too technical, you should just stop me and I will try to explain a bit more precisely. To start with, I will give you the big picture of this program, and then I will really insist on some ingredients of the proof; I will not explain everything, just the most important ideas. The whole program is about the sixth problem of Hilbert, which is about the axiomatization of physics, especially in the case of gas dynamics. In the case of gas dynamics, we have essentially three levels of models. The first level is the atomic level, and that will be our starting point: you have interacting particles. You just assume that your gas is made of a lot of atoms, and these atoms interact together, so you can write Newton's equations for this system of particles.
Okay, so that is the starting point, and at the very end what you would like to understand is whether you can use some macroscopic model for your gas, that is, fluid models. For instance, if you would like to compute the flight of a plane, you will of course not compute the motion of all the atoms in the air; it is just impossible. So there you would like to have fluid models, for instance the Navier-Stokes equations. The question by Hilbert was to understand this connection. But he actually suggested part of the solution by saying that gases, by definition, are systems where the particles have a kind of weak interaction: there are small particles and a lot of vacuum, and the interactions are essentially weak. So he suggested that maybe you can have an intermediate level of description using kinetic theory, and especially the Boltzmann equation. So the question is to establish this connection. And actually, maybe it is simpler to proceed like this: let me explain how you get this first transition. Here you have a completely deterministic system with a lot of particles, and there the phase space is actually much smaller, because you just want to understand the motion of one typical particle. So here you have n particles, and there, one typical particle: that is a statistical description, and that is essentially the part I will focus on in the sequel. Of course, if you want a statistical description which is meaningful, you need n to go to infinity. But that is the point where this assumption of dilute gas comes in.
Okay, so we will see that we also have to scale the interaction potential, and in this conference there will probably be different ways of scaling the interaction. One which is very well known is mean field. Mean field means that you have your n particles, but you take the strength of the interaction to be very small, like one over n, so that the total force is of order one. Here it is different: what you assume is that the range of the interaction potential is very small, so that essentially you will not see all the particles, just your neighbors. That is really important here. What I will be interested in is this low density limit, meaning that when you do see a particle, the force is really of order one, so you will be deflected, your velocity will really jump; but you will not see a lot of particles, just because the interactions are very short range. I will explain this low density assumption and the precise scaling a bit more, but it is really important that the Boltzmann equation tells you something about dilute gases and not about everything: for a plasma, for instance, it is not the right description. Then you have this other transition, which I will not describe in detail, but I want to say one word about the limit in which you get it, because it is important to understand that the results we already know about the first transition are not enough to understand the whole picture. What you take here is what is called a fast relaxation limit. In kinetic theory, in the Boltzmann equation, you essentially describe two phenomena: one is transport, particles are transported; and you have collisions, sometimes particles just collide and then they are deflected.
And the fast relaxation limit means that you have a lot of collisions before you are transported, so locally you expect your gas to be at thermodynamic equilibrium. And if the gas is locally at thermodynamic equilibrium, then you can describe it by just a few macroscopic parameters: essentially the density, the bulk velocity, and the temperature. Then the fluid equations just govern the evolution of these macroscopic parameters. So what is important is that it is a fast relaxation limit: the collision process is much faster than the transport, and you would like to see a lot of collisions. That means the Knudsen number goes to zero. The Knudsen number is essentially the ratio between the mean free path and the typical observation length, and it has to be really, really small. The problem is that you have to see a lot of collisions, but we will see that the first transition is in general justified only for a few collisions, actually a fraction of the typical mean free time. That is really a problem, because if you only have this result for short times, it is just impossible to combine it with the results on hydrodynamic limits. That is one important point, and that is why I will describe the situation close to equilibrium: close to equilibrium, we will be able to justify this transition for long times. So maybe I can just redo this picture close to equilibrium, because it will actually be much better, and I will tell you exactly which models I will consider at each level of description.
But we will see that this picture is completely justified in this case close to equilibrium. So, the same thing, but close to equilibrium. Let me start with the atomic model. Here I will really take the simplest possible model, a gas of hard spheres: your particles are just spheres, transported with rectilinear uniform motion until they collide, and then you have what is called specular, elastic reflection, meaning that when two particles collide, both the energy and the momentum are conserved. So I will start with hard spheres and elastic collisions. Elastic collision, once again, means that if two particles with velocities v and v1 collide, you have these two conservation relations, and then essentially there is only one possible collision law. Imagine two particles colliding, arriving with velocities v and v1, and let n be the unit vector x minus x1 divided by the modulus of x minus x1. Then the law is that v prime equals v minus ((v minus v1) dot n) times n, and v1 prime equals v1 plus ((v minus v1) dot n) times n. That is the collision law, and apart from the moments where they collide, the velocity is constant and dx/dt equals v. So I have this big system of hard spheres with elastic collisions, and of course the number of particles will go to infinity. Also, I do not know the distribution very precisely: I am not interested in the precise localization or configuration of all these particles. What I know at time zero is just a kind of probability density on this big phase space.
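The hard-sphere collision law just written can be sketched numerically. This is a minimal illustration (my own, not from the lecture), assuming three-dimensional velocities; it checks the two conservation relations on a random collision.

```python
import numpy as np

def collide(v, v1, n):
    """Hard-sphere elastic collision law.

    n is the unit vector (x - x1)/|x - x1| joining the centers at impact;
    the relative velocity is reflected along n, which conserves both the
    momentum v + v1 and the kinetic energy |v|^2 + |v1|^2."""
    s = np.dot(v - v1, n)
    return v - s * n, v1 + s * n

# Quick check of the conservation laws on a random collision.
rng = np.random.default_rng(0)
v, v1 = rng.normal(size=3), rng.normal(size=3)
n = rng.normal(size=3)
n /= np.linalg.norm(n)
vp, v1p = collide(v, v1, n)
assert np.allclose(v + v1, vp + v1p)                      # momentum conserved
assert np.isclose(v @ v + v1 @ v1, vp @ vp + v1p @ v1p)   # energy conserved
```

For a head-on collision along n, the two velocities are simply exchanged, as the formula shows.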
So essentially here you have some randomness at time zero, on the initial data. Because what you say is that if you observe the gas, what you can really measure is something like a density for a typical particle; you cannot really observe the configuration of all particles, or the joint probability of all particles. So you will choose the initial data to be compatible with this observable, which is just the density of one particle. Essentially you assume that the particles are i.i.d. at time zero. Of course, it is not completely true that they are i.i.d.: they cannot be completely independent, just because they cannot overlap. So "independent" is not correct; it is actually independent up to the exclusion, and identically distributed. Does it start with the Gibbs distribution? I will explain that right now. In general there is no reason: if I just want to address this problem, there is no reason to start with the Gibbs measure, I can take any distribution for the typical distribution of one particle. But here, since I want to be at equilibrium, I will look at this Gibbs measure, and that is exactly what we do. Actually, there is still one more thing about the randomness in the initial data. I will have this randomness on the distribution in x and v, but I will also have some randomness in the number of particles: what I know is not the exact number of particles, but just the average number of particles, and I will have something like a quasi-Poisson law, which is called a grand canonical setting. So mu, the average number of particles per unit of volume, is fixed, but the number of particles is not fixed.
And so essentially I will start with a Gibbs measure of this form. Here I have the partition function, which is just a normalization; then mu to the n divided by n factorial, where mu will actually depend on epsilon; and then the Gibbs density, exponential of minus the energy. The energy of the system is just the sum of all velocities squared, divided by two; that is the case with temperature equal to one, and of course I can change the temperature. This would be exactly the case of independent initial data, because this part is just a product, but I cannot take a pure product because of the exclusion, so I have to multiply by the indicator function of the domain where |xi minus xj| is bigger than epsilon for all pairs. That is the initial measure. So n is a random variable here, the number of particles, and I take all possibilities, so my phase space is the union over all n. And mu epsilon is this average number of particles. Now I have to look at this system in the low density limit, so mu epsilon will be related to the size epsilon of the particles: I assume that the diameter is epsilon, and what I will look at is the low density limit, where mu epsilon goes to infinity, epsilon goes to zero, and mu epsilon times epsilon to the d minus one equals some alpha, and alpha will be the inverse mean free time. For the moment, alpha is fixed. In x, it is the Lebesgue measure: I assume I am on a torus in x, so this is really a probability measure, everything is integrable. And in v, it is this Gaussian density times the Lebesgue measure.
Okay, and why is this an inverse mean free time? If you draw a picture, you see that each particle, during a time one, sweeps a volume which is a cylinder. The section of this cylinder is like epsilon to the d minus one, and the length is the typical velocity times the time. You would like to have typically one collision per unit of time, and if you have mu epsilon particles per unit volume, this gives you exactly this scaling relation. So that is the first level. The second level should now be something like the Boltzmann equation, an equation at the kinetic level. But I am close to equilibrium, so let me look at the empirical measure. The object I want to look at is the empirical measure, which I will call pi epsilon: it is one over mu epsilon times the sum over i from one to n of the Dirac measure at (xi, vi). That is a measure depending on t and x: I take all particles, and I put a Dirac mass at each particle. In the canonical setting, fixing the number of particles, this is just one over n times the sum, so just the average of all these Dirac masses; you have a first averaging, which is averaging over the labels of the particles. In the grand canonical setting, you normalize by one over mu epsilon, but you still take the sum of all the Dirac masses. Now, if you look at this and you believe that this scheme is correct, then what you expect, and what Lanford's theorem tells you, is that this empirical measure concentrates, in this low density limit, on the solution of the Boltzmann equation.
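The collision-cylinder heuristic for the scaling mu_epsilon * epsilon^(d-1) = alpha can be checked with a toy Monte Carlo (my own illustration, with d = 3 and static, ideal-gas background particles): the number of partners met per unit time by a tagged particle depends only on alpha, not on epsilon, up to the geometric constant pi coming from the circular cross-section.

```python
import numpy as np

rng = np.random.default_rng(5)

# Boltzmann-Grad scaling sketch (d = 3): a tagged particle at unit speed
# sweeps, during time t, a cylinder of cross-section pi * eps^2 (centers
# closer than eps collide).  With density mu_eps = alpha * eps^-(d-1), the
# expected number of partners met in time t is pi * alpha * t, whatever eps.
d, alpha, t, L = 3, 2.0, 1.0, 1.0
for eps in (0.02, 0.01):
    mu_eps = alpha * eps ** -(d - 1)
    counts = []
    for _ in range(500):
        n = rng.poisson(mu_eps * L ** d)          # grand-canonical number
        pts = rng.uniform(0, L, size=(n, 3))
        # straight path along the x-axis through the middle of the box
        r2 = (pts[:, 1] - L / 2) ** 2 + (pts[:, 2] - L / 2) ** 2
        counts.append(np.count_nonzero((r2 < eps ** 2) & (pts[:, 0] < t)))
    mean_hits = np.mean(counts)
    # ~ pi * alpha * t collisions per unit time, independently of eps
    assert abs(mean_hits - np.pi * alpha * t) / (np.pi * alpha * t) < 0.15
```

Halving epsilon quadruples the density but quarters the cross-section, so the collision rate stays of order alpha, which is exactly the content of the scaling relation.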
But that is not really interesting, because in this case, if you start with this initial data, the Boltzmann equation has a constant, stationary solution, which is just a Maxwellian. Actually, since this is an invariant measure and there is a dynamics, you do not really need Lanford's theorem to tell you anything about the dynamics here: it is just statistical physics at equilibrium. You can see that this empirical measure concentrates on the solution of the Boltzmann equation which is just this stationary solution, exponential of minus v squared over two, suitably normalized. So that is not really interesting. Of course, this is a result which is true for all times, but then you start from this stationary solution and the fluid model is just the zero solution. It is true, but it is not very interesting. So close to equilibrium we are not really interested in this: that is the law of large numbers. Now we are interested in the next correction, which is the equivalent of the central limit theorem. So we are not interested in the empirical measure, but in the fluctuations of this empirical measure. This is almost the same: pi epsilon t is the measure from before, and zeta epsilon t is the fluctuation. You take the empirical measure and you remove its expectation; I will just index it by epsilon, because the full notation is a bit too long. Of course, if you look at this difference alone, it just converges to zero, so you have to rescale by the square root of mu epsilon. The fact that the difference vanishes is essentially the law of large numbers, but everything is at equilibrium.
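The two scales just described (law of large numbers for the empirical measure, square root of mu for the fluctuation) can be seen in a toy sample. This is my own caricature, not the hard-sphere measure: N is Poisson with mean mu and the velocities are i.i.d. standard Gaussians, so the exclusion is ignored, which is the epsilon-to-zero limit of the grand canonical setting; the field is tested against the observable f(v) = v^2.

```python
import numpy as np

rng = np.random.default_rng(1)

def fluctuation_samples(mu, n_samples=10000):
    """Sample zeta(f) for f(v) = v^2 in a toy grand-canonical gas:
    N ~ Poisson(mu) particles with iid standard Gaussian velocities.
    pi(f) = (1/mu) sum_i f(v_i) has mean E[f(v)] = 1, and the fluctuation
    is zeta(f) = sqrt(mu) * (pi(f) - 1)."""
    zetas = np.empty(n_samples)
    for k in range(n_samples):
        v = rng.normal(size=rng.poisson(mu))
        zetas[k] = np.sqrt(mu) * ((v ** 2).sum() / mu - 1.0)
    return zetas

for mu in (100, 1000):
    z = fluctuation_samples(mu)
    # Law of large numbers: pi(f) -> 1, so zeta is centered; CLT scale:
    # for a compound Poisson sum, Var(zeta) = E[f(v)^2] = E[v^4] = 3,
    # of order one uniformly in mu.
    assert abs(z.mean()) < 0.15
    assert abs(z.var() - 3.0) < 0.4
```

The point is that pi(f) itself degenerates to a constant as mu grows, while the rescaled field zeta keeps an order-one Gaussian statistic: that is the object the central limit theorem is about.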
And now, of course, at the level of the fluctuation field, you will see something. So the question I am asking is: is it possible to prove a central limit theorem? First of all, what can you expect in this limit? If this picture is very robust, then you expect that this fluctuation field should satisfy an equation which looks like the linearization of the Boltzmann equation. That is the natural guess for the fluctuation field. But you are at equilibrium, so if you are a physicist, you expect more. The linearized Boltzmann equation, like the Boltzmann equation itself, is dissipative, in the sense that there is an entropy which is dissipated, so it is irreversible. If you are a physicist, you expect that, since you are at equilibrium, there should be a kind of fluctuation mechanism which compensates this dissipation. So even if I know nothing, but I just believe that this is the correct picture, then at the level of the fluctuations, close to equilibrium, I should see a process satisfying an equation with two terms: one which is dissipation, related to this linearized Boltzmann equation, and another term which is some noise, describing the fluctuations compensating the dissipation. And that is actually what you can prove. That is the first theorem, with all the people I mentioned at the beginning, Thierry, Isabelle and Sergio; there were previous results that I will mention just afterwards. So what you can prove is that in the low density limit, with this initial data, and so on,
what you get is that the fluctuation field converges in law to the solution of the fluctuating Boltzmann equation, which I will write now: d zeta t equals L zeta t dt, where L is the linearized Boltzmann operator that I will define in a moment, plus a noise, which is a Gaussian noise. We can actually compute the covariance; what is important is that the noise is white in t and x. And what is really nice in this regime is that you can prove this convergence for all kinetic times, and even for slightly diverging kinetic times, so that you can let this alpha go to plus infinity. Of course, it has to go to plus infinity very, very slowly, but still, you can catch regimes where you have fast relaxation, and then the diffusive regime, and you can actually get the Stokes equations and the Fourier equation for the temperature. So: for all kinetic times, and even slowly diverging times, with alpha going to infinity like something like log log log of mu epsilon. What is the time in the statement? Here, saying diverging times, or saying that you see a lot of collisions because this parameter alpha, the inverse of the mean free time, goes to infinity, is the same thing. Essentially, this parameter measures the typical number of collisions per particle per unit of time, so if it goes to infinity, that is just another way of counting time: a kinetic time is when this quantity is of order one. Of course, for the moment this is a very vague statement, because I did not define these objects, but before going into something more precise, I would like to make some comments.
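The balance between the dissipative operator and the noise can be caricatured in one dimension. This is my own scalar sketch, not the actual fluctuating Boltzmann equation: the operator L is replaced by a plain relaxation at rate lam, and the noise strength is fixed by fluctuation-dissipation so that the equilibrium variance sigma2 is left invariant.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar caricature of d(zeta) = L zeta dt + noise: L is replaced by -lam,
# and the white-noise strength 2 * lam * sigma2 is chosen so that the
# dissipation and the fluctuations exactly compensate (Ornstein-Uhlenbeck
# fluctuation-dissipation balance).
lam, sigma2, dt, n_steps, n_paths = 1.0, 1.0, 1e-3, 5000, 4000
z = rng.normal(scale=np.sqrt(sigma2), size=n_paths)   # start at equilibrium
for _ in range(n_steps):
    z += -lam * z * dt + np.sqrt(2 * lam * sigma2 * dt) * rng.normal(size=n_paths)
# Dissipation alone would shrink the variance by e^(-2*lam*t); with the
# matched noise, the equilibrium variance sigma2 is preserved.
assert abs(z.var() - sigma2) < 0.1
```

Dropping the noise term in this simulation makes the variance decay to zero, which is the irreversible, entropy-dissipating behavior; the matched noise is exactly what restores stationarity at equilibrium.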
So the first comment is about this noise; there are actually three remarks. The first remark, then, is about the noise. You see that when you start from this hard sphere dynamics, everything is deterministic: if you start from one configuration, then, up to excluding some pathological configurations for which the dynamics is not defined, you can define one single dynamics for all times. And now it is really different, because at the limiting level you have a noise, and it is a dynamical noise, not just a noise on the initial data. At the particle level, you actually have a bit of noise on the initial data, but in the scaling, essentially, you do not: the noise on the initial data is vanishing. If you start from this Gibbs measure, at time zero you have a bit of noise, but with a scaling which makes it zero in the limit. So you have no noise at time zero, but this microscopic randomness on the initial data is transferred into a dynamical noise on the fluctuation field. That is really important, because it is kind of magic: the randomness on the initial data is transferred as a dynamical noise. It is very subtle, and now we understand it a bit better, but the geometry, the way it is really transferred, is something incredibly complicated, because it is encoded in very, very small structures of the initial data. It is not something you are really able to track: it is related to a very complicated geometric structure in the initial data, or rather, not really in the initial data, but in the phase space.
And this seems to me to be related to what people call spontaneous stochasticity. What you have is a system which is well defined and deterministic for fixed epsilon, but two trajectories will diverge very fast: if you change the position of one particle a little bit, you can create a collision or make a collision disappear, and then the whole evolution will be completely different. It is not really an instability, because for fixed epsilon everything is stable, it is continuous; but in the limit when epsilon goes to zero, you have a lot of instability at the microscopic level, and because of this instability you generate this dynamical noise. So it is probably related to spontaneous stochasticity. What is really important here, and this is something you do not see in the mean field limit, is the following. In the mean field limit, everything is much more stable: if you change the position of one particle a little bit, you do not care, because it contributes to the force field like one over n, so that is not really a problem. Now if you change the position of one particle a little bit in this low density regime, you will change the dynamics completely, forever. So that is really something important here, related to the microscopic scale. Instability is maybe not the right word, because everything is continuous, once again; but it is not uniformly continuous. If you look at the divergence of trajectories, it is actually controlled, but with a constant which is like one over epsilon, so when epsilon goes to zero you have no control. You have this instability, and then you can generate noise.
So I think this is something really important. Maybe at this stage I should write this operator; but the other remark was that one can reach very long times, so maybe I should first say what was known about this limit. It was known, since the work of Spohn, who I think was probably the first to write this, that part of this theorem holds: the covariance of this field converges to the solution of this equation, without the noise of course, because you just have the covariance, and only for short times. So: first results, convergence of the covariance for short times. Then in dimension two we had a very specific result, with Thierry and Isabelle, which was for long times, but it was really specific to dimension two and only for the convergence of the covariance. So: first results for long times, only in dimension two. Both of these proofs, though this one was a bit different, were really using Lanford's strategy; here there is a bit of extra structure in time, and I will comment on this later on. Now, you see that the new result is much better, because you have long times and you also have all the moments, so you prove that the limiting zeta is Gaussian: Gaussianity, plus long times. These results are really recent. What is really important here is that we can reach these very long times, and that is what actually allows us to look at the hydrodynamic limit as well. I will define all these things later on; I just want to describe a bit the last step, and then I will go to more precise things. Maybe I can just erase this; the third remark can wait.
So I just want to tell you how you get this last step, but then I will not talk about it later on, because I have to choose. The point now is to look at this limit when alpha goes to infinity, the case where, during each unit of time, you have a lot of collisions. One thing which has been very well known for a long time is the case where you just remove the noise: you start from the linearized Boltzmann equation, which I write as dt g plus v dot grad x g equals alpha times L g. That is the equation you get for the covariance, for instance, if you start from this atomic model with hard spheres in the low density limit, assuming that mu epsilon times epsilon to the d minus one equals alpha. And L I can maybe write here. L of g is something a bit complicated: the integral of M(v1) times (g(v prime 1) plus g(v prime) minus g(v) minus g(v1)), times (v minus v1) dot omega, d omega dv1. That is the linearized Boltzmann operator. Let me explain. The Boltzmann operator in general, Q(f, f), is almost the same: it is f(v prime) f(v prime 1) minus f(v) f(v1), integrated against the same factor; that is the nonlinear Boltzmann operator for hard spheres. Let me explain all the terms. It is a bit complicated, but this operator expresses a kind of jump process in velocity space: it tells you how the distribution is modified by the collisions. This part is what is called the gain term: you create new particles of velocity v whenever you have two particles of velocities v prime and v prime 1 which collide. So you have your v prime and your v prime 1 here.
And when these two particles collide, the collision law, which ensures that both the energy and the momentum are conserved, allows you to construct a pair (v, v1); so the only possibility to create a new particle of velocity v is a collision between two velocities like this. Here, omega is the deflection parameter. At this level of kinetic theory, it is not something you can measure as at the microscopic level: it is just a random parameter. So omega is a random deflection parameter, and this factor tells you the probability that, by a collision of two particles with velocities v prime and v prime 1, you create a new particle of velocity v. That is the jump; that is the gain term. Now this other part is the loss term: a particle of velocity v may collide with another particle of velocity v1, and then it will be deflected, it will jump the other way, and you will have fewer particles of velocity v. That is really what this jump process describes. And this factor (v minus v1) dot omega is what is called the collision cross-section: it gives you the rate of the jump process. Here you already see a bit of this stochasticity, even at the level of the Boltzmann equation, because omega is really random. You take an expectation, so you do not see a noise; but already at the level of the law of large numbers you see a bit of randomness at the dynamical level, because the deflection parameter is random. So that is the usual Boltzmann collision operator. And now, if you write f(v) as M(v) times one plus a small fluctuation, the linearization of this operator gives you exactly the operator above. So that is the linearized operator.
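The jump process described by the gain and loss terms can be sketched numerically. In this illustration of mine (3D, hard spheres), the deflection parameter omega is rejection-sampled with the hard-sphere cross-section ((v - v1) dot omega)_+ as its rate, and the consistency of gain and loss is the involution property: colliding the post-collisional pair with the same omega gives back the original velocities.

```python
import numpy as np

rng = np.random.default_rng(3)

def collide(v, v1, omega):
    # omega plays the role of the random deflection parameter: at the
    # kinetic level only its law is prescribed, not its value.
    s = np.dot(v - v1, omega)
    return v - s * omega, v1 + s * omega

def sample_omega(v, v1):
    """Rejection-sample a unit vector with density proportional to the
    hard-sphere cross-section ((v - v1) . omega)_+, i.e. the rate of the
    jump process in velocity space."""
    w = v - v1
    bound = np.linalg.norm(w)
    while True:
        omega = rng.normal(size=3)
        omega /= np.linalg.norm(omega)
        if rng.uniform(0.0, bound) < np.dot(w, omega):
            return omega

v, v1 = rng.normal(size=3), rng.normal(size=3)
omega = sample_omega(v, v1)
vp, v1p = collide(v, v1, omega)
# Gain and loss are consistent: pre- and post-collisional pairs are
# exchanged by an involution for the same deflection parameter.
vpp, v1pp = collide(vp, v1p, omega)
assert np.allclose(vpp, v) and np.allclose(v1pp, v1)
```

This involution is what makes the gain term well defined: every particle of velocity v created by a collision of (v prime, v prime 1) corresponds to exactly one pre-collisional configuration.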
And actually, when I write this script L, it is just minus v dot grad x plus L. Sure; so when you do the linearization, how does the alpha appear? Normally the transport is just transport; it is not scaled with this low density parameter. Only the collision term is scaled: you see that this collision term is quadratic, so it really sees the density, and that is why you get the density parameter alpha only in this term. So now look at the limit when alpha goes to infinity in this equation. I can tell you rapidly what happens. If you can prove some a priori bounds for your g, then of course this term acts as a penalization, and you expect g to be in the kernel of L. When alpha goes to infinity, call the solution g alpha: g alpha will converge to some g, and if we assume something like this, you get that L g equals zero. This tells you that g is a combination of collision invariants: g(t, x, v) is rho(t, x), plus u(t, x) dot v, plus theta(t, x) times (v squared minus d) divided by two. These are the linearization of the density, the linearization of the bulk velocity, and the linearization of the temperature. So if you are in a fast relaxation limit, where the collision process is much faster than the transport, you expect to be indeed close to local equilibrium, and for the fluctuation this means that g has to be of this form. Now you see that there are actually two kinds of limits. Either you are really interested in this time scale, and then what you see is the acoustic system for rho, u and theta.
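The statement that the kernel of L is spanned by the collision invariants can be checked at the level of a single collision (my own illustration, 3D): for g of the form rho + u dot v + theta (v squared minus d)/2, the combination g(v') + g(v1') - g(v) - g(v1) vanishes for every collision, because each collision conserves mass, momentum and energy; hence the integrand of L g is pointwise zero.

```python
import numpy as np

rng = np.random.default_rng(4)

def collide(v, v1, omega):
    # hard-sphere collision law from the lecture
    s = np.dot(v - v1, omega)
    return v - s * omega, v1 + s * omega

# g is an arbitrary combination of the collision invariants 1, v, |v|^2,
# written in the hydrodynamic form rho + u.v + theta (|v|^2 - d)/2, d = 3.
rho, u, theta = rng.normal(), rng.normal(size=3), rng.normal()
g = lambda w: rho + u @ w + theta * (w @ w - 3) / 2

for _ in range(1000):
    v, v1 = rng.normal(size=3), rng.normal(size=3)
    omega = rng.normal(size=3)
    omega /= np.linalg.norm(omega)
    vp, v1p = collide(v, v1, omega)
    # Conservation of mass, momentum and energy at each collision makes
    # g(v') + g(v1') - g(v) - g(v1) vanish identically, so L g = 0.
    assert np.isclose(g(vp) + g(v1p), g(v) + g(v1))
```

Conversely, these are the only functions with this property, which is why the fast relaxation limit forces g into this five-dimensional hydrodynamic form.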
So in this regime, what you see is that rho, u, theta satisfy the acoustic equations. So here you see that you have no dissipation: even though this equation is a bit dissipative, in the limit you will not see any dissipation at the macroscopic level. But now if you rescale time and look at the parabolic scaling, so you just look at this scaling, okay? In this second, new scaling, the parabolic scaling, what you obtain is that you will filter the acoustic waves, so you have these very fast oscillations, but then you will see just the non-oscillating part, which is the incompressible part, the part in the kernel of the acoustic operator. So you get that the divergence of u has to be equal to zero, and the Boussinesq relation, that rho plus theta has to be a constant. Actually, it has to be zero, I think, because you are on the torus. And then you can prove that u and theta satisfy the Stokes-Fourier equations, okay? Okay, and then, so in this, say, hyperbolic regime here you have no dissipation, and so if you add a noise here, the noise will not be seen at the level of this acoustic approximation. But now if you look at very, very long times, so the diffusive time, if you think about Brownian motion, you see that you will see the diffusive limit by just rescaling time, just like here, okay? And then at this level here, you have the Fourier equation, so with the Laplacian, okay? So here, the equations are dissipative. And then if you add some noise here, by the same fluctuation-dissipation theorem, you expect that these Stokes-Fourier equations also have to be perturbed by a noise which exactly compensates the dissipation, okay? So now if you add the noise here, then you get the fluctuating Stokes-Fourier equations, and actually you can write a new theorem here, okay?
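Schematically, the two regimes just described can be written as follows; these are my normalizations and the constants are indicative:

```latex
% Hyperbolic (acoustic) scaling: t = O(1), \alpha \to \infty.
\partial_t \rho + \nabla_x \cdot u = 0, \qquad
\partial_t u + \nabla_x (\rho + \theta) = 0, \qquad
\partial_t \theta + \tfrac{2}{d}\, \nabla_x \cdot u = 0 .
% Parabolic scaling: t \mapsto \alpha t. Acoustic waves are filtered,
% leaving the incompressibility and Boussinesq constraints
\nabla_x \cdot u = 0, \qquad \rho + \theta = 0 \ \ \text{(on the torus)},
% together with the Stokes--Fourier equations
\partial_t u + \nabla_x p = \mu\, \Delta_x u, \qquad
\partial_t \theta = \kappa\, \Delta_x \theta .
```

The first system propagates waves without dissipation; the Laplacians only appear once time is rescaled diffusively.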
So now if you look at this limit, so the limit where mu epsilon goes to infinity, with mu epsilon times epsilon to the d minus 1, which is alpha epsilon, also going to infinity, and alpha epsilon much smaller than log log log mu epsilon. And now if you look at this fluctuation field, so you really start from the particles, okay? And you look at the fluctuation field, rescaling time, since alpha is large, then this guy will converge to... Actually, of course you cannot really take this guy as is, because this guy has a lot of components. It is defined by duality on functions of t, x and v, but you are not interested in all of this, so you have to project this fluctuation field somehow; you have to project it on the hydrodynamic fields. Then it will converge to the solution of the fluctuating Stokes-Fourier equations, okay? So dt u equal mu Laplacian u plus the noise, and the noise is a divergence term, because you have to use the Leray projection and then take the divergence, okay? I will not write exactly what this guy is, but something like du equal mu Laplacian u dt plus something like this, and the same for theta, with the viscosity here, okay? And what is important here is that the constraint is preserved. Yeah, that's like a gradient of a white noise, okay? I will not detail this part, but this is just to tell you that close to equilibrium you really have this whole picture: you can really go from the particles here to the fluid equations, even taking into account all these fluctuations. Okay, so that's the first thing that I wanted to say; I'm already late. What I would like to do now is to explain the main ingredients, essentially to prove this theorem here, this central limit theorem, okay? Of course, part of these ingredients have been known since the work of Lanford.
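For orientation only, a schematic form of the limiting fluctuating system; the precise noise coefficients were not written on the board, so the square-root factors here are my guess from the fluctuation-dissipation principle:

```latex
% Fluctuating Stokes--Fourier system (schematic):
\partial_t u = \mu\, \Delta_x u
  + \sqrt{2\mu}\; \mathbb{P}\, \nabla_x \cdot \xi_u ,
\qquad \nabla_x \cdot u = 0 ,
\qquad
\partial_t \theta = \kappa\, \Delta_x \theta
  + \sqrt{2\kappa}\; \nabla_x \cdot \xi_\theta ,
% \mathbb{P} = Leray projection onto divergence-free fields;
% \xi_u, \xi_\theta are space-time white noises, so the noise exactly
% compensates the dissipation and preserves the incompressibility
% constraint.
```

The role of the Leray projection is precisely to keep the constraint divergence of u equal to zero under the noise.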
But I will still spend a bit of time explaining them, because they are really the very basic arguments. And then I will try to focus on some very key arguments, okay? I will give you right now the list of these arguments, and then I will spend the three next hours, so just one now, but two on, I don't know, Wednesday or something like this, explaining these important new things, okay? Yeah, sure. [Question:] And this is projected on hydrodynamic fields by taking an expansion and keeping a few terms, or? [Answer:] So here what we do: this field is actually defined essentially by duality, so what you know is its action on observables, on test functions. And now we say that instead of looking at any test function, we just look at test functions of a specific form. So typically phi(x) times (|v| squared minus (d plus 2)) divided by 2, and this will give you this theta here, okay? And for u, you look at test functions of the form: a divergence-free function dotted with v, okay? So instead of looking at all possible test functions, you just restrict your attention to some very specific test functions, for which essentially you prescribe the dependence with respect to v, okay? But that's really a projection by duality. [Question:] Okay, I have a naive question. At the atomic level, it is clear to me what the invariance of the Gibbs measure means. Yeah. But here, at the fluid level, we also have some invariant measure for the fluid. So what does it mean at the level of fluids? [Answer:] So you would like to construct the invariant measure for this guy here? You know how to do that. [Q:] I don't know. What is the meaning, at the level of fluids, of having an invariant measure? What is the meaning for the velocity? I mean, for the fluid, what does it mean that if the initial velocities are distributed according to this measure, then later they stay so?
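The projection by duality just described can be summarized as follows; the symbol for the fluctuation field is mine:

```latex
% Projection of the fluctuation field \zeta^\varepsilon_t on the
% hydrodynamic components: test only against functions of the form
\hat\rho_t[\varphi]
  = \zeta^\varepsilon_t\big[\varphi(x)\big],
\qquad
\hat u_t[\tilde\varphi]
  = \zeta^\varepsilon_t\big[\tilde\varphi(x)\cdot v\big]
  \quad (\nabla_x \cdot \tilde\varphi = 0),
\qquad
\hat\theta_t[\varphi]
  = \zeta^\varepsilon_t\Big[\varphi(x)\,
      \tfrac{|v|^2 - (d+2)}{2}\Big].
```

So nothing is expanded or truncated: one simply restricts the class of test functions, prescribing the dependence in v.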
Actually, this means that, say, somehow this Laplacian here is strange, because what you have to understand is that the energy at this level is the counterpart of the entropy at the level of the Boltzmann equation. Okay, so you really have to understand the energy as a kind of information about the system. And what I say is that this part of the equation loses information, just like Brownian motion: you lose information by looking at this average. And what I say is that this part here comes from the fluctuations over all possible trajectories, and essentially you just recover a kind of stochastic reversibility with this guy. But I am not able to construct the invariant measure which underlies the system, because I have all these projections, and I don't know how to project and do all these kinds of things. I don't know if this answers your question. Okay, so now we have the list of tools. So the first part was the sixth problem of Hilbert, the second part was the description of all these results, and now we come to the mathematical tools. Okay, and as I told you, I will not describe everything, because it is really a nightmare to prove this thing. But I would like to focus on some things which are, I think, rather intuitive, so that you can have a good idea of the global strategy, even though after four hours you will probably not be able to write all the details. Okay, so as I told you, the first ingredient is what is present in Lanford's proof, and actually I will spend the next hour explaining a bit this proof. Okay, so in this proof by Lanford there are really two parts. One part, the original idea, is to write the exact solution, so the exact correlation functions, using a series expansion.
Okay, so essentially, and I think this is really classical in all perturbation methods, you say that you don't really understand what happens, but you write a Duhamel formula and then iterate the Duhamel formula. Okay, so that's the first important ingredient: this series expansion, which you obtain by Duhamel iteration. And that's very classical in all these kinds of problems. Actually, it doesn't work very well in the mean-field case, at least not in all cases, but here it is rather good. But of course, you see that as soon as you write this kind of iteration, writing the solution as a formal series, you have a problem of convergence of the series, and that's why essentially you can justify the Boltzmann equation only for very short times. Okay, so the fact that essentially the result is true only for a very short time is just due to this method. This is responsible for the short time. Because essentially the first term is like t, the second term is like t squared, then t cubed, and you see that this kind of series has a chance to converge for t small, but not for t large. Okay, so that's very classical. Then there is a second argument, which maybe is not written exactly like this in Lanford's proof but which I think is really important: it is a geometric argument, okay? You would like to represent all these Duhamel iterations by trajectories, which are not trajectories of the system but pseudo-trajectories, okay? So that's really important: it is a geometric representation of the dynamics. And actually it is not the real dynamics, but the dynamics projected on some finite-dimensional spaces, okay? And so here we will have the notion of pseudo-trajectories, and I will spend a bit of time explaining these pseudo-trajectories.
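A toy caricature, not the actual Lanford estimate: if the n-th Duhamel iterate is only bounded by (C t)^n, because the combinatorial factor from choosing collision partners eats the 1/n! of the time simplex, then the bound is a geometric series, converging only for t below 1/C. The constants here are illustrative.

```python
def series_bound(C, t, n_terms):
    """Partial sum of the geometric bound sum_n (C t)^n on the
    iterated Duhamel series: a caricature of why Lanford's expansion
    converges only on a short time interval t < 1/C."""
    total, term = 0.0, 1.0
    for _ in range(n_terms):
        total += term
        term *= C * t
    return total

C = 2.0
small_t = [series_bound(C, 0.25, n) for n in (10, 20, 40)]  # C*t = 0.5
large_t = [series_bound(C, 1.00, n) for n in (10, 20, 40)]  # C*t = 2.0

# below the critical time, adding terms changes nothing: convergence
assert abs(small_t[2] - small_t[1]) < 1e-3
# above it, the partial sums keep exploding: divergence
assert large_t[2] > large_t[1] > large_t[0]
```

This is exactly the "first term like t, second like t squared" growth in the lecture: harmless for small t, fatal for large t.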
But then we will see that what is important here, what you have to remember for the rest of these lectures, is that the kind of convergence, say the notion of convergence that you have, is once again very different from the mean field. So in the case of the mean field, what you obtain is that essentially all trajectories will be close to each other, okay? You can modify a bit the position of one particle, etc., but it is very stable, in the sense that all trajectories stay close to each other. Here it is different: in general the trajectories are close to each other, but sometimes trajectories may diverge very rapidly, okay? So that's what we will call a recollision. So sometimes it may happen that trajectories are quite far from each other. And so you see that the notion of convergence is really different, because you don't expect the empirical measure to converge for all initial data. What you expect is that it will converge in probability. So with probability close to one, all the trajectories will have a nice behavior, but on this bad set of initial data, you have something else, okay? So that's really different from the mean field, and you can really see this at the level of trajectories. So you see that this is important for the notion of convergence: say the empirical measure is well-behaved for almost all initial data. Actually it's not almost all, say for a set of initial data of probability converging to one. And this is very important, because then you will see that all the fluctuations, everything else, will be encoded in this bad set, which is the complement of this set here, okay? So that's really important: the fluctuations are encoded in the small complement. And you see that this is consistent with what I told you, that in the case of the mean field, the small complement is essentially empty, and then you don't have any fluctuation, okay? That's really different.
Okay, so even though you can have the impression that this is more or less the same, you will see that actually it is very different. Maybe I should also say that in the mean field you will retrieve this kind of behavior if you look at the next-order correction, okay? At the main order, you don't see these things, but if you go to the next-order correction, then you see something like this, with dissipation and fluctuation and all these kinds of behaviors. Okay, so the starting point is really this strategy by Lanford. Okay, then there is a second argument, which I will try to explain next time: the time sampling, okay? Actually it's related to this thing here. What I say is that maybe, in general, everything is nice, and I should be close to the Boltzmann dynamics; then it's perfect, I'm happy with that, okay? But now you see that there can be pathological behavior, and the first way you see this kind of pathological behavior is that you can have a lot of collisions, and that is responsible for this short-time limitation, okay? So that's the first thing that you would like to avoid. And then the other thing is that you have these strange recollisions, say strange pathological trajectories, and you would like to avoid those as well, since otherwise you will not have any convergence, okay? So essentially you have two obstructions to long-time convergence in Lanford's proof. The first one is that you can have a lot of collisions, and then you have very high iterations of the Duhamel formula, which is very bad because these terms can be very large. And the other is that maybe you have this kind of bad behavior here, and then it is just wrong that you have the convergence, okay? So the idea is that if you would like to somehow improve Lanford's proof, then you have to avoid these kinds of behaviors, okay?
So now what we will do is just to say, okay, I would like to have the convergence on a very big time interval here, say from zero to theta here. And Lanford tells me that I have convergence on this very small interval here; this is just the Lanford time, okay? But somehow if you go until the Lanford time, it's too late, okay? You cannot win with this strategy. So what we will do is restrict time to very, very small time intervals here. These very small time intervals are, say, not exactly microscopic times. Microscopic times would be the times which are just invariant by the dynamics: if you take times of the order of epsilon, as the size of the particles is epsilon, you will always see the same picture by dilation, okay? So you will take a time delta here which is a bit bigger than epsilon, but, say, of the same order as epsilon, okay? And then each time you reach the end of one of these small time intervals, you just check whether something pathological happened, okay? If you have a pathological thing, then you stop, and you iterate only when everything is nice, okay? So: look at very small time intervals and iterate only if everything is nice, which is a very rigorous mathematical statement. I think with this, you can do the proof by yourself, okay? So actually, you see that you have two types of pathological behaviors. One is the very big number of collisions, okay? And the other one is what we will call recollisions, okay? So essentially, what we will do is that at each time delta here, we remove trajectories with recollisions. And we will see, in terms of graphs, because you have a lot of combinatorics here, that this corresponds to something which is not minimally connected, so you have loops and things like this. I will call these loops and cycles, and I will explain this.
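The sampling strategy just stated can be sketched in a few lines; this is only a caricature of the logic, with names and the toy pathology entirely mine:

```python
def sample_and_iterate(trajectory, T, delta, is_pathological):
    """Sketch of the time-sampling strategy: advance in steps of
    length delta (slightly above the microscopic scale epsilon);
    at each checkpoint, discard the trajectory if a pathology
    (recollision, too many collisions) occurred on the last segment,
    otherwise iterate."""
    t = 0.0
    while t < T:
        segment = trajectory(t, t + delta)
        if is_pathological(segment):
            return t      # stop: the bad behavior is localized in time
        t += delta
    return T              # good behavior all the way to the final time

# toy usage: segments are labelled by their start time, and a
# pathology occurs on the segment starting in [0.3, 0.4)
trajectory = lambda a, b: a
bad = lambda start: 0.3 <= start < 0.4
stopped_at = sample_and_iterate(trajectory, 1.0, 0.1, bad)
```

The point of stopping rather than pushing through is exactly the localization in time discussed below: everything before the stopping time is known to be nice.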
So at each time r delta, okay? And actually you need to do something at a time which is a bit bigger, which is tau, with tau small, really small compared to one. So delta is just a bit bigger than epsilon, then you have tau, which is still bigger, and then you have one, okay? And you remove trajectories with too many collisions, actually with a super-exponential collision process, at each time tau. I will come back to this. But you see, the idea is really that with this sampling you just discard all possible things which are really bad and which somehow prevent the convergence here, okay? And this argument was already present in the case of quantum limits: it is really something important in the paper by Erdős and Yau. And actually it was also present in our first paper on long times, in the case of just one tagged particle converging to Brownian motion, okay? That was a paper with Thierry and Isabelle, okay? So that's really one important thing. But of course, you cannot use this without saying what you will do with the remainders, okay? Because here you generate a lot of remainders: each time that you stop the iteration, you see, you stop in the middle of nowhere. You can stop here, okay? Then what do you do with this remainder, okay? And so, I think one really nice new argument that we used to prove this theorem, and that's the reason why we really use the fact of being close to equilibrium, is that we can really use the invariant measure to discard all these bad parts. So you see that with this sampling, what we are able to do is to localize the problem, okay? Until here, since you didn't stop the iteration, everything is nice, and now you know that the bad thing happens here, okay? So with this sampling, what you are able to do is to localize the bad behavior in time, okay?
And so now the next argument, which is really important and which really uses the invariant measure, is a kind of weak convergence method. So you see that it is really different from Lanford. Lanford's is really a strong convergence method: you write a series expansion for your correlation function, okay? This is an exact formula, and then you prove that each term converges to something. And so in the end, what you have is really strong convergence in a norm, okay? You can control the remainder between the asymptotics and the original correlation function. So here it is different. We will really use the fact that we compute expectations under the invariant measure, okay? So here we look at moments, moments meaning expectations of some observables under the invariant measure. And now the very nice thing is that under the invariant measure, you can say that this guy, so the problem here, will happen with essentially zero probability, at least with negligible probability, okay? So what you are able to do is to localize the bad behavior. And then, with the invariant measure, what you will be able to do with these moments is actually to decouple all times, okay? So what happens here at time zero, what happens here at time theta, I don't know; and what happens here? And just because of this contribution here, it will be negligible, okay? So what is really important is that the remainders can be estimated using time decoupling, okay? Of course I will essentially spend one hour on these two things. Of course they go together, but I think it is important for you to have already this intuition: what is really different is that you don't try to have an exact representation of the correlation function at this stage here. You just try to have, say, the expectation of some moments at different times: you take the fluctuation field at different times and test it against test functions, so you have these observables here.
And then you say: okay, but here there is something that essentially never happens, and so its contribution to the expectation will be essentially zero, and I can just forget it. So I hope to use this further, but I'm very optimistic, you know? What I hope is that actually we can do things like this also far from equilibrium, using the entropy, okay? So here, you see, what we are essentially using are moments: what is good is that when you are close to equilibrium and you look at the fluctuation field, you can essentially control any Lp norm, so any moment of order p of the fluctuation field, just by Hölder's inequality, and then it's fine, okay? And so my hope, but maybe I'm the only one who thinks it's possible, is that essentially we should be able to do the same thing using the entropy, even in the case where you start far from equilibrium. Of course you control something: usually, in Lanford's proof, you use some weighted L-infinity estimate on the data, and this cannot be propagated for further times. But one thing which is propagated is the entropy. Okay, the entropy, at the level of the microscopic system, is something which is just preserved; it's a conserved quantity. And my hope is that somehow it tells you that you are not so far from equilibrium, so in some sense you are still close to equilibrium, and then if you have something which is really, really pathological, this cannot happen. But okay, that's like science fiction, okay? You have to be very, very optimistic to think that such a thing can be true, but it's important to dream a little bit, okay? And then the last argument that I would like to show you, and I don't know whether I will have a lot of time to do it, but I think it is really something nice, which actually gives you a very complete statistical picture of all these things here, okay?
Which is the question of dynamical clusters, okay? And these dynamical clusters are a systematic way of classifying all corrections to this Boltzmann approximation. So even in the general case here, what you expect is that this Boltzmann equation is the law of large numbers, essentially. And you see that what is important is that this equation tells you that the probability of having a collision between two particles, which normally depends on the joint probability of having two particles with velocities v prime and v prime 1, is now like a product, okay? So the Boltzmann approximation is to say that essentially everything is chaotic, in the sense of statistical physics, meaning that all particles remain independent forever, okay? That's really what is important in the Boltzmann approximation, okay? What is actually in Boltzmann's theory is really an assumption: this independence, okay, which is also called chaos. And now what I say is that with dynamical clusters we will actually be able to classify all deviations with respect to this chaos, okay? So this gives you a systematic way of describing and classifying the deviations from chaos. So here, in the case where you are close to equilibrium, what you expect essentially is that in the end you will end up with something which is Gaussian, and so of course you are interested only in, say, correlations with two points, okay? Because all the other terms will be really, really small corrections, and actually in the limit they will completely disappear, okay? So in this case, you don't really need this complete classification in terms of what we call cumulants. But actually it's a very, very powerful method, and it tells you that in this general case, far from equilibrium, you have this law of large numbers.
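The chaos assumption and its corrections can be written schematically; the cumulant notation below is mine:

```latex
% Molecular chaos: the two-particle correlation factorizes,
F_2(t, z_1, z_2) \simeq F_1(t, z_1)\, F_1(t, z_2),
% so the collision rate involves only products of one-particle
% densities. Cumulants measure the defect of factorization at
% every order:
F_2 = F_1 \otimes F_1 + \kappa_2, \qquad
F_3 = F_1^{\otimes 3}
  + \big( F_1 \otimes \kappa_2 \big)_{\mathrm{sym}} + \kappa_3,
\quad \dots
```

Close to equilibrium only kappa_2 matters for the Gaussian limit; the full hierarchy of cumulants is what gives access to large deviations.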
Then you have the central limit theorem to describe the small fluctuations, and then you can actually even reach a large deviation principle, okay? So if you really use this powerful method here, with the classification of all possible clusters of all orders, then you end up with a large deviation result. So this is just a remark: this actually provides a large deviation result. So here, to prove this central limit theorem, we will not really use all this classification, but still we will need a bit of it when you are interested not only in the covariance but in other moments. So if you are really interested in proving that asymptotically you have a Gaussian process, so that you have Wick's rule, then you need at some point to really use these dynamical clusters, okay? So here they will be useful to prove Wick's rule. You could say that for Wick's rule you don't need to look at correlations of order three and four; but actually here, at some intermediate timescale, you really need to have a more precise description in order to be able to iterate the whole thing, okay? So we need a precise description at the intermediate timescale, typically the timescale tau here, to be able to do the whole iteration. So that will be the program for the last hour, and I'm not sure that I will have a lot of time to do it, because I'm already quite late. So, okay, we'll see. Okay, so those are really all the ingredients that you somehow need to use to prove this theorem. And then you need a lot of technical things. And the technical things are a lot of combinatorics, okay? A lot of geometry, and bad geometry, like 3D geometry where you have to compute sets and intersections of cylinders and things like this. But I promise that I will not tell you anything about that. Of course I'm just putting everything under the carpet, but okay.
So now, in the remaining time, what I would like to do is really to come back to this Lanford theorem, because I think it's really important for you to have these basic elements to understand the rest of the proof, okay? So I'm sorry for those of you who already know all this Lanford material very well. Okay, you will lose an hour, or you can maybe just go outside right now. But I really need to take a bit of time to explain this. Okay, so now this was supposed to be the beginning of the second hour; you see, it's just half an hour in, okay? I have already erased what was written about Lanford's proof. But if you have to remember one idea, it is really that what you are doing in Lanford's proof is to project the dynamics on finite-dimensional spaces, okay? And the tool to do that is the correlation functions, okay? So you project: essentially you have your distribution, and you see that your initial measure lives in a space which is infinite-dimensional, because you do not fix in advance the number of particles. It can be as large as you want, okay? But then you project everything. So you project this measure on finite-dimensional spaces, okay? That's what you do when you introduce the correlation functions, okay? So essentially, what the correlation function does for you is to compute the expectation of 1 over mu epsilon to the p, times the sum over indices which are all different (so they are different, okay? I will not write this anymore) of a function, say capital H with p arguments, evaluated at z_{i_1}(t), ..., z_{i_p}(t), okay? So that's the expectation under the initial measure, okay? And so what you are doing is: you have your initial measure, then you pick p points, okay? Of course you take any p points among all the particles, and then you look at the trajectories of these p points and at the configuration of these p points at time t, okay? And of course this is a function which depends on t now, okay?
You start with all your points, then you pick p points at time t, and you test this with a function h here which is very smooth, very integrable, so a very nice function, okay? And then this will be, by definition, the integral of W_p, which depends on t and Z_p, times h_p(Z_p) dZ_p, okay? Where capital Z_p is just (z_1, ..., z_p), okay? So this tells you what the distribution of these p variables is at time t, okay? So here you see that you really have this projection, okay? Because now I just look at observables which depend on a finite number of variables. No, maybe F would be better for the notation, yeah. Okay, and now, okay, I will not do this computation. But if you start from the hard-sphere dynamics, so I just recall that dx_i/dt = v_i and dv_i/dt = 0 as long as |x_i minus x_j| is bigger than epsilon for any j different from i, plus the collision law, then you can write an equation for this F_p here which is exact, okay? And for this, the grand canonical setting is very nice, because you don't have the small corrections of the canonical setting. There you have small corrections which are due to the fact that the total number of particles is fixed, and so you have some small deviations which are just due to this. But in the grand canonical setting, you have a very nice equation, which tells you that dt F_p plus the sum of v_i dot grad_{x_i} F_p... So here you recognize the transport part of the equation; that's just the translation of these two equations here, okay? So if I put an external force or something like this, I would just change the transport here by adding another term in this part of the equation, okay? And then I have another part, which comes from the fact that maybe I will hit the boundary of the phase space, and then I will have a collision, okay? So now this term here comes from Green's formula, okay? And it corresponds to the boundary term.
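In symbols, the projection by correlation functions reads as follows (writing F_p for the p-particle correlation function, as suggested in the lecture):

```latex
% Definition of the correlation functions F_p, by duality against
% observables h_p of p particles:
\mathbb{E}_\varepsilon \Big[ \frac{1}{\mu_\varepsilon^{\,p}}
  \sum_{i_1 \neq \cdots \neq i_p}
  h_p\big( z_{i_1}(t), \dots, z_{i_p}(t) \big) \Big]
 = \int F_p(t, Z_p)\, h_p(Z_p)\, dZ_p,
 \qquad Z_p = (z_1, \dots, z_p).
```

The infinite-dimensional measure on particle configurations is only ever seen through these finitely many variables.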
The boundary of D_epsilon^p, okay? Well, not exactly the whole boundary, okay? And then here what you have is something like the sum from i equal 1 to p, and you see that each one of these particles may collide with another particle, okay? So you have an integral over the set where |x_i minus x_{p+1}| equals epsilon, okay? And then you have F_{p+1} evaluated at Z_p together with the extra particle, okay? So essentially what I would like to explain is that you have a factor epsilon to the d minus 1, which comes from this boundary here, okay? And you have a factor mu epsilon, which comes from the choice of the label p plus 1 of this extra particle among all particles, okay? So this tells you that you have a factor here which is mu epsilon times epsilon to the d minus 1, and so here you see the inverse mean free path, okay? So now I will just re-parameterize this boundary, and I end up with F_{p+1} of Z_p, and then x_i plus epsilon omega for the position and v_{p+1} for the velocity, integrated against (v_{p+1} minus v_i) dot omega, dv_{p+1} d omega, okay? So that is just the parameterization of the boundary. And then the usual thing to do is to split this term into two parts, depending on the sign of this quantity, okay? You see that this omega is just the direction of x_{p+1} minus x_i, and according to the sign, you will have some particles which are about to collide or which have just collided, okay? So you split this term into two terms, splitting the integral according to the sign of this quantity here, and then you write everything in terms of pre-collisional variables, in terms of particles which are about to collide, okay?
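Putting the pieces together, the exact hierarchy equation has the following schematic form; sign conventions before the gain/loss splitting are as I reconstruct them:

```latex
% BBGKY hierarchy for hard spheres (grand canonical), posed on
% D^p_\varepsilon = \{ |x_i - x_j| > \varepsilon \ \ \forall i \neq j \}:
\Big( \partial_t + \sum_{i=1}^p v_i \cdot \nabla_{x_i} \Big) F_p
 = \mu_\varepsilon\, \varepsilon^{d-1} \sum_{i=1}^p
   \int_{\mathbb{S}^{d-1} \times \mathbb{R}^d}
   F_{p+1}\big( t, Z_p,\, x_i + \varepsilon\omega,\, v_{p+1} \big)\,
   \big( (v_{p+1} - v_i) \cdot \omega \big)\, d\omega\, dv_{p+1}.
% The prefactor \mu_\varepsilon\, \varepsilon^{d-1} = \alpha is the
% inverse mean free path; splitting the integral by the sign of the
% flux (v_{p+1} - v_i)\cdot\omega gives the gain and loss terms.
```

It is exact but not closed: the equation for F_p involves F_{p+1}, which is what the Duhamel iteration below resolves.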
Because if you think about writing the dynamics, the way you do things is to prescribe the state of the particles after the collision in terms of the state before the collision, not the reverse, okay? So you decide here that you will change variables and write everything in terms of pre-collisional variables. And then you get a form which is very similar to the Boltzmann equation, with a gain term and a loss term. The only thing is that it's not factorized: here you see that you have F_{p+1}, and here you have F_p. And so you don't have a closed system, okay? But you really get something which is close to the Boltzmann equation, okay? So I will not rewrite this, but essentially you have alpha... Maybe I should do it, okay? So a sum, and then you have F_{p+1}, where I just write the two arguments which are important: you have (x_i, v'_i) and (x_i plus epsilon omega, v'_{p+1}), okay? That's the case where this term here was positive, okay? And then minus F_{p+1} with (x_i, v_i) and (x_i plus epsilon omega, v_{p+1}), okay? And then you change the sign of omega here, and then you get this with the positive part of this quantity. [Question:] In this computation, do we use that the number of particles is random, or not? [Answer:] Yes. If the number of particles is not random, you see that, for instance, if you look at the case of p particles, then instead of having mu epsilon here, you have n minus p. And so you see you have a bit of correlation, which is due to the fact that now the total number of particles is fixed, and so the possible other particles are just n minus p. And these kinds of small errors accumulate. So if you would just like to derive the Boltzmann equation, it's not a problem, because you encounter this error essentially once. But if you would like to get large deviations, then it is just impossible to deal with these small accumulating errors.
Yeah, so the grand canonical setting is much better, because there this relation is just exact. Okay, so that's just to tell you how you start with this projection. So I think it's really important that the first step in Lanford's proof is this projection on finite-dimensional marginals. And you see that now the idea in Lanford's proof is really to prove that this fp (of course, it depends on epsilon) converges to something, and it's a kind of strong convergence of f epsilon p. Okay, so let me explain how it works. But you see that it really is strong convergence, in the sense that you project and then prove some strong convergence of this projection. Question: excuse me, I have a short question. In the equation, we have one of the p particles scattering with a particle that is not among the described ones, the p plus first. What about the collisions among the p particles themselves? Yeah, so that's really important, you're right, thank you for the remark. This equation is defined on D epsilon p, meaning that if at some point two of the p particles collide, then you still have a boundary condition for this function: they will be reflected. So that's really important. You're right that this transport here, which seems to be the same as in the Boltzmann equation, is not the same: when two particles among the p, which are already fixed, see each other, then you have a collision, and that is exactly the difference with the Boltzmann hierarchy, where particles among the p just cross each other. So that's really the important point. I will come back to this later on, but you're right that it's really important to say this. Okay, so now let me, yeah. Question: presumably the f should be symmetric? Yeah, it's symmetric with respect to the p arguments. But why do you need to sum over i then? Here? No, on the right-hand side. Because it's still symmetric, but you have collisions involving all particles; since you have the sum here, it stays symmetric.
But of course you have to take into account collisions with all particles, because then you see that if you look at the sequence of collisions, you can have shocks with different particles. So that's really important. Where I really use the symmetry is when I multiply by this mu epsilon: normally it should be a sum over all possible pairs of particles, one which is already among these p particles and one from the remainder, but I said that all the particles of the remainder are exchangeable. That's really important. So now, in the proof, there are two steps. Okay, so you see that, okay, I will just forget about the precise formula for the collision operator, because it's not very nice: you see that it is an integral over a hypersurface, which is of codimension one, and in the limit when epsilon goes to zero this trace becomes singular, so it's really awful. So I will just forget about that, okay, and just tell you how you would like to do things; there is a bit of functional analysis to be done here to make sense of what I will write, okay? So essentially what you end up with is something like dt plus, say, capital V dot grad X (that's my short notation for the free transport) applied to fp, equal to a collision operator, which acts on fp plus one, okay? And now you see that I have a sum here over the p particles. So, if I forget about the size of the velocities (you see that the velocities are unbounded, so this is not actually bounded, but I will forget about that, and I will forget about the integrals, about the trace, about everything), essentially what this operator is doing is just like a multiplication by p, okay? Of course, you see that it's a rough, rough estimate, but I think it's good to understand how things go. So now what I say is that I would like to represent the solution, and I say, okay, I will just do a Duhamel iteration, okay? It will be an exact formula.
Maybe it's not very nice, but at least it's an exact formula, okay? So I say that fp at time t is, first, just the transport: I denote by Sp the transport of this system, so the first term is Sp of t applied to fp of zero, okay? That's the case where I just forget about the right-hand side and only apply the transport. Then, plus the integral from zero to t of alpha times the transport Sp of t minus tp plus one, then Cp, p plus one, applied to fp plus one at time tp plus one. Okay, but then I'm not really happy, because now I have this fp plus one on the right-hand side, okay? So I just redo the same thing for fp plus one, and if I do this an infinite number of times, I end up with something like this: a sum over n, from zero to plus infinity, of an operator that I will call Q of p, p plus n, of t, applied to fp plus n of zero. Okay, and this operator is just an alternating sequence: Sp of t minus tp plus one, then Cp, p plus one, then Sp plus one of tp plus one minus tp plus two, then Cp plus one, p plus two, et cetera, up to Cp plus n minus one, p plus n, and then Sp plus n of tp plus n. So I transport, then I add a new particle, then I transport, then I add a new particle, and so on, okay? And then I have a big integral here over the simplex of ordered times, zero less than tp plus n less than, and so on, less than tp plus one less than t, okay? So now, if I just try to evaluate the size of this term, okay, you just remember that each collision operator here is just like a multiplication by the number of particles, okay? And you have a simplex in time, so the size of this simplex is like t to the power n divided by factorial n, and then you have the operators, okay?
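As a formula (my reconstruction of the time labels, which the speaker himself left approximate; S_q denotes the q-particle transport on D epsilon q):

```latex
% Iterated Duhamel series for the hierarchy (schematic reconstruction).
F^\varepsilon_p(t) = \sum_{n=0}^{\infty} Q_{p,p+n}(t)\, F^\varepsilon_{p+n}(0),
\qquad
Q_{p,p+n}(t) = \alpha^{n} \int_{0\le t_{p+n}\le\cdots\le t_{p+1}\le t}
  S_p(t-t_{p+1})\, C_{p,p+1}\, S_{p+1}(t_{p+1}-t_{p+2})\cdots
  C_{p+n-1,p+n}\, S_{p+n}(t_{p+n})\; dt_{p+1}\cdots dt_{p+n}.
```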
The transport is a very nice operator, which essentially preserves all reasonable functional spaces you can imagine, okay? So you just say that it's like multiplication by one, and then the collision operators multiply by p plus one, up to p plus n. And so you see that, up to a geometric factor, say two to the power p plus n, something like this, you recognize here just factorial n. Okay, so these two quantities, the simplex volume and the product of the operator sizes, are essentially of the same size up to a geometric factor, and then you see that, in order to be able to sum everything, you need t to be, say, much smaller than one, okay? So that's the reason why you have this short-time restriction: you just do that, and then you end up with something which is convergent only for short times, okay? So that's the first step. And just to go back to what I told you about the sampling and so on: you see that the problem here is that you cannot really discard the configurations with a lot of collisions. Of course, you don't expect to have so many collisions, okay? Because essentially you expect this branching process to be a very nice branching process, with the rate given here, so the number of particles should grow like an exponential, okay? It should not grow like crazy. But with this kind of estimate, you don't see this exponential bound. And it's even worse, because you see that these collision operators actually have a gain part and a loss part, and of course, when you are at equilibrium, both terms compensate. But here I just forget that I have a minus sign, okay? So that's really very bad, okay? But, say, we have no alternative. So that's bad: you have no control on the growth, which we expect to be exponential and not super-exponential, and no compensation between gain and loss terms.
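The crude counting just described can be checked numerically. A sketch (all names mine) of the bound on the n-th term: the simplex volume t^n/n! times alpha^n times the product (p+1)...(p+n) of the operator "sizes". The ratio of consecutive bounds tends to alpha times t, so the bounds behave geometrically and the series is summable only when alpha times t is small:

```python
import math

def term_bound(n, p, t, alpha):
    """Crude bound on the n-th term of the iterated Duhamel series:
    (volume of the time simplex) x (product of collision-operator sizes)."""
    simplex = t**n / math.factorial(n)                          # t^n / n!
    operators = alpha**n * math.prod(range(p + 1, p + n + 1))   # alpha^n (p+n)!/p!, ~ n! up to a geometric factor
    return simplex * operators

# Ratio test: bound(n+1)/bound(n) = alpha * t * (p+n+1)/(n+1) -> alpha * t,
# so the bounds decay geometrically when alpha*t < 1 and blow up when
# alpha*t > 1.  Hence the short-time restriction t << 1/alpha.
```
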
And that's really bad, because you see that even at equilibrium, if you look at Lanford's proof, you get the convergence only for a very short time. Of course, at equilibrium everything is constant in time, so you can iterate; but still it's very, very bad, okay? And then you have the second argument, and I think I still have five minutes, a bit more, seven minutes, to explain the second argument. Question: can I just ask, when you say very short time, can you make that a bit more quantitative? Is it up to a small constant, as in Lanford's proof, or an arbitrary constant? No, so it tells you that if you have this parameter alpha, you get a time which is like one divided by five alpha, something like this. So it really is Lanford's proof. And it tells you that actually you have less than one collision per particle, on average. Of course, you still have a lot of collisions in total, because you have a lot of particles, but on average you have less than one collision per particle. So it's really bad, and of course, with less than one collision per particle, you have no chance to reach relaxation, because for that you need many collisions. Okay, so that's the first step, the estimate. And then the second argument in Lanford's proof is this geometric thing. So now I go back to this Q operator here, and I just try to understand what I'm doing. Okay, and then we will see that, with a geometric interpretation of this, it's actually not very difficult to prove the convergence. Of course, once the series is absolutely convergent, you just need to look at one term, prove that each elementary term is converging, and then you are fine. Okay, so you start with this: you would like to look at the distribution of p points like this, okay? So that's p particles at time t. And what are you doing? You first transport these particles backward until the time tp plus one.
So that's very bad to draw, actually, but okay. So I have a transport like this until, at some point, you see that I have a collision of one of these particles with another particle, okay? So now, if I would like to know the history of these particles, then I need to, so this is the time tp plus one. At this time tp plus one, I add a new particle here, okay? And of course I have to prescribe things, first of all the colliding particle here. So here you have a bit of combinatorics, because you have to encode the collision tree. So I need to give the label, a p plus one, of the colliding particle; this is just an integer between one and p, okay? But then I also have to prescribe this collision time tp plus one, the velocity of the new particle, and also the impact parameter. So I have to prescribe vp plus one and omega p plus one, okay? So you see that now I'm not really describing a real trajectory of particles, because I really start from something which is a projection here, and so what I'm describing is a backward dynamics; but this is not a real trajectory, because I add particles, and some parameters here are just integration parameters. Okay, so it's not a real trajectory, just because of the projection, but still it's like a dynamics. So we will call it a pseudo-trajectory. Okay, so now I do that: I have this particle p plus one, then at time tp plus two I have another collision, like this, like this, like this, okay? And then you do that until time zero. So at time zero you have your p plus n particles, and actually you have a representation here. What I say is that somehow I can represent fp at time t as an average over all possible histories. So for a possible history, the first thing that I have to prescribe is n, the total number of branchings.
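The heuristic that this branching should stay "nice" can be illustrated with a toy Monte Carlo (entirely my own sketch, not part of the proof): going backward in time, with k particles present a new particle is added at total rate alpha times k. This is a Yule process, whose expected size at time t is p times e to the alpha t, exponential rather than super-exponential:

```python
import math
import random

def sample_tree_size(p, alpha, t, rng):
    """Sample the final number of particles in a toy backward branching
    (Yule) process: each of the k current particles branches at rate alpha,
    so the next branching time is exponential with rate alpha * k."""
    k, elapsed = p, 0.0
    while True:
        elapsed += rng.expovariate(alpha * k)
        if elapsed > t:
            return k
        k += 1

rng = random.Random(0)
sizes = [sample_tree_size(2, 1.0, 1.0, rng) for _ in range(20000)]
mean_size = sum(sizes) / len(sizes)
# the empirical mean should be close to p * exp(alpha * t) = 2e
```
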
But now, if I look at just one elementary term of the series, n is fixed. Okay, and then I have to prescribe this collision tree and all these parameters, from p plus one to p plus n. So I have a big average over many things, but what is very good is that I know the probability of the configuration at time zero, because at time zero I know everything. Okay, so this is just a weighted average over the initial configurations. Of course, this is very complicated, and by itself it's not much better than the equation at the beginning. So what I have to do now is to compare this with what I expect to be the limit, the solution of the Boltzmann equation. And that's exactly the point that you mentioned before: in the case of the Boltzmann equation, you have the same kind of representation, with of course some small differences. Okay, so the first difference is that when you add a particle here, you see that particles have a size, so they are created at a distance which is of order epsilon, while you have no spatial shift when you are looking at the Boltzmann hierarchy. But, okay, as n now is finite, if you have a spatial shift of size epsilon each time you have a collision, in the end maybe you are at distance n times epsilon; and if you have a nice initial datum here, then you are fine. Okay, then you have a second difference, which is not really a geometrical problem, which is the initial distribution here: because of the exclusion, it depends a little bit on epsilon, while, say, you have no exclusion in the initial data on the Boltzmann side. So it's a bit different: you have exact factorization for the Boltzmann equation in the limit, but it's not the case for fixed epsilon.
And then there is this very bad thing, which is really different, and this is what is complicated: in the Boltzmann case, maybe you see, so if I'm not cheating too much, these two particles here will just collide, okay? Because, say, they go straight, and after a while they just collide. Of course, with this kind of representation on the blackboard, in dimension two, all the particles will seem to collide; but in principle, because of the scaling, the gas is really dilute, and this kind of event will happen with negligible probability. Okay, so what is really, really bad is what we call recollisions: when two particles which are already in the system, so not a newly created particle, but two particles which are already in the system, collide. Okay, so that's really, really bad, because then you see that in the true dynamics they will be deflected, something like this, while in the Boltzmann hierarchy they just cross each other and go straight, and then the trajectories will be really far from each other. Okay, so here you see that with this recollision you cannot compare: you have no coupling between the two trajectories. No coupling. And so what you have to prove is that this happens with very small probability. Okay, so you prove that you have a coupling for almost all parameters: for a set of parameters of probability almost one, you get the coupling between the two trajectories, and sometimes there is something which is bad. Okay, so that's not a problem at the level of the law of large numbers, because you don't care; it's just a small contribution. So that's really important: these recollisions do not contribute to the limit at the level of the law of large numbers. But of course, recollisions will be really important when you are interested in the fluctuations; actually, the noise will typically come from these recollisions, so they will be really important in the sequel. But in Lanford's proof they are just bad, and you just throw them away.
So that's really important, and this is really something that you have to keep in mind: this representation of the solution with these trajectories, because then we will really play with these dynamical clusters. In equilibrium statistical physics, what you are doing is considering clusters of points like this; now we will do clusters of these big trajectories, these big dynamical objects. So it's really important to have in mind that we can represent everything by this geometric picture. Okay, I think it's time, I'm already late. Sorry, and thank you very much for your attention.