Okay, so maybe before I start with new material, I should recall a little bit where we stopped last time. What I presented last time were the main arguments of Lanford's proof. There were three important ingredients, and we will of course still use them in the next developments, but we will see that they are not enough. The first one was to project on a finite dimensional subspace of the phase space. That's just the story of looking at correlation functions, which were also introduced at the quantum level in the other courses. So that's really the idea of a finite dimensional projection. Then we wrote the equations for these correlation functions; I will not rewrite everything, but this is what is called the BBGKY hierarchy, and I should write it at least once. Then, as was already done in many situations, also for the random Schrödinger equation and for many other limits of particle systems, we used the iterated Duhamel formula, so we have a series expansion. And I recall that this series expansion is responsible for the short time restriction; that's where the restriction comes from. And then the last argument was the geometric representation of each elementary term in this series expansion. We have a lot of terms, and the index here is just the number of particles that we have to add to trace back the history of one particle. This geometric representation is very useful because it allows us to compare the original dynamics, with fixed epsilon, to the limiting dynamics where epsilon is equal to zero. And then we get the Boltzmann equation. So the last argument here is the geometric representation.
One important thing is that, of course, these are not trajectories of the real dynamics, but pseudo-trajectories: they are just a way of understanding the series expansion, and they do not correspond exactly to real trajectories of particles. So with these three arguments, what you can prove is that for short times, the Boltzmann equation can be obtained as a law of large numbers for this system of hard spheres. But this short time limitation is very bad from a physical point of view, because the typical time that you can reach is just a fraction of the mean free time. So you see less than one collision per particle on average. That is still a lot of collisions, because you have a lot of particles, but it is not really a collisional regime, and in particular you will never see relaxation towards equilibrium or fluid limits. That's really the main problem with Lanford's proof. So there is a very natural question, which is to extend this convergence to longer times. That's the question I would like to raise now. And maybe before I start with a possible answer in a particular case, I should explain why it is not so obvious to remove this restriction. Actually I mentioned it already: this proof does not see the signs at any point. Some terms have a minus sign and some a plus sign, but when you derive the a priori estimates, you just look at all the terms and take absolute values. So the first problem is that we do not see any compensation between gain and loss terms; these compensations are not taken into account. Now look at the Boltzmann equation, which is the limiting object.
Just forget that you have a minus sign: write, say, only the gain term, or the sum of the gain term and the loss term with plus signs. Then what you have on the right-hand side is a product in x. Maybe I should rewrite the equation. You have a term with the transport, but for the moment I don't care about it; you can just conjugate by the transport operator and that's fine. And then you have the collision term, which is something like f(x,v') f(x,v'_1) minus f(x,v) f(x,v_1), integrated against the cross-section (v − v_1) · ω, with dv_1 dω. I don't care about the problem of large velocities here; I would just like to focus on this term. You see that it is just a product in x. So what you should have in mind is that this equation behaves like something much simpler, namely ∂_t u = u². If you forget about the sign, you just have a product, and you get an equation like this. And this equation, of course, blows up in finite time. Actually, the Lanford time is more or less the time at which this equation blows up. So if you do not take the signs into account, then essentially you are dead: you cannot go to longer times. And a remark: even close to equilibrium, you are not able to use the fact that you are at equilibrium, where the compensation between gain and loss is exact. That's really the bad feature of this proof, and it is not completely clear how to improve this kind of argument. One thing which is really classical in singular limit problems in analysis would be to try to use what you know about the limiting equation.
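To make the blow-up heuristic concrete, here is a minimal numerical sketch (my own illustration, not from the lecture) of the toy equation du/dt = u², whose exact solution u(t) = u0/(1 − u0·t) blows up at t* = 1/u0:

```python
# Toy model obtained by dropping the sign in the collision term:
#   du/dt = u^2,  u(0) = u0,  exact solution u(t) = u0 / (1 - u0*t),
# which blows up at t* = 1/u0 (the analogue of the Lanford time).

def u_exact(t, u0):
    """Exact solution of du/dt = u^2 with u(0) = u0 (valid for t < 1/u0)."""
    return u0 / (1.0 - u0 * t)

def euler(u0, t_end, n_steps):
    """Explicit Euler integration of du/dt = u^2 up to t_end."""
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u += dt * u * u
    return u

u0 = 1.0  # blow-up time t* = 1/u0 = 1
print(u_exact(0.5, u0))         # 2.0: the solution has already doubled
print(euler(u0, 0.9, 100_000))  # close to u_exact(0.9) = 10, growing fast near t*
```

Ignoring the minus sign of the loss term turns the quadratic nonlinearity into pure growth; this is exactly why a signless estimate cannot go past a time of order 1/u0.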
And to try to use this structure to understand the original system better. But here it is complicated, because you cannot expect any strong convergence. So that's the next problem: the convergence cannot hold in a very strong sense. You cannot build some kind of stability around the limiting system, which would be the case for mean field; for mean field you can imagine entropic methods to prove the convergence. Here you cannot have something like this: the convergence cannot hold in a strong, say entropic, sense. The reason is simple. You start from a Hamiltonian system, so the entropy and all quantities like it are conserved. And this is not the case for the Boltzmann equation. So you do not expect the two systems to be close to each other in entropy, because one has constant entropy and the other has decreasing entropy. That's just not possible. This means that when you take the limit, you really lose a macroscopic part of the information, and any strong convergence like this is forbidden. So it is not clear how you can use what you know about the Boltzmann equation. Maybe a last comment about how to prove that this equation has a solution, because maybe it is an idea to try to use the same kind of arguments at the microscopic level. So the first type of solution is a smooth solution close to equilibrium, and a very important point there is that the linearized operator is a contraction. But then you are back to the problem that at the microscopic level you do not expect any contraction. So that's bad.
So actually you have three ways of constructing a solution to the Boltzmann equation. The first one is the series expansion; that's exactly what you do when you construct a solution in Lanford's proof. The second one, close to equilibrium, is to use this smoothness plus this contraction. And the third one is the one introduced by DiPerna and Lions using renormalization, but it relies very strongly on the entropy inequality. So that's really something you have to understand: what will play the role of this entropy information? If you would like to understand something in the nonlinear case, then you really have to understand this question of information in the microscopic system. That's not what I will do, simply because I don't know how to do it. I would like to be able to, but I don't know. So here what I will do is look at a simpler problem, close to equilibrium. This is the third part, and I will introduce a weak convergence method. What is important here is that it is close to equilibrium, although I still hope that we can use this notion of weak convergence even a bit further from equilibrium; but that's another story. The idea is that it is too complicated to look at the correlation functions: they are very bad, and we saw last time that two problems can arise. The first is that at some point you have too many collisions, and that's the reason why the series may not be convergent, because the high-order terms are too large. The other problem comes from the geometric representation: at some point we can be very far from the Boltzmann dynamics, because of loops or cycles in the collision graph.
That's another reason why the correlation functions may be bad, in the sense that they will be far from the limiting object. So the idea of this weak convergence approach is to forget about correlation functions and just look at moments. Since we are close to equilibrium, we are of course not interested in the law of large numbers itself, but in the next order correction, which is the fluctuation field. So what I will look at are the moments of the fluctuation field. If we wanted to do something like this out of equilibrium, the first step would be to look at moments of the empirical measure, not of the fluctuation field; but close to equilibrium the relevant quantity is the fluctuation field. So what does this mean? Let me recall the definition of the fluctuation field. It is a random field defined by duality, meaning that I look at its action on any test function h. It is 1 over the square root of mu epsilon times the sum for i equal 1 to N of h evaluated along the trajectory z_i(t), minus the expectation of this h under the Gibbs measure. I recall that mu epsilon is the typical, average number of particles, and N is the total number of particles in one realization, because I am in a grand canonical setting. So here I look at everything under the Gibbs measure, and I should recall what it is for those of you who were not here last time. You have a normalization, which is the partition function, then mu epsilon to the n divided by factorial n, then the exponential of minus one half of the sum of the v_i squared.
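Written out in formulas (my reconstruction of the blackboard notation; normalizations are the standard ones and may differ by constants), the fluctuation field and the grand canonical Gibbs measure are:

```latex
% Fluctuation field, defined by duality against a test function h:
\zeta^\varepsilon_t(h)
  = \frac{1}{\sqrt{\mu_\varepsilon}}
    \sum_{i=1}^{N}\Big( h\big(z^\varepsilon_i(t)\big)
      - \mathbb{E}_\varepsilon\big[h\big] \Big).

% Grand canonical Gibbs measure: on the n-particle sector, up to constants,
\frac{1}{\mathcal{Z}_\varepsilon}\,
\frac{\mu_\varepsilon^{\,n}}{n!}\,
\prod_{1\le i<j\le n}\mathbf{1}_{|x_i-x_j|>\varepsilon}\;
\exp\Big(-\tfrac12\sum_{i=1}^{n}|v_i|^2\Big)\,\mathrm{d}Z_n .
```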
And then you have the factor encoding the spatial exclusion: particles have to be at distance at least epsilon from each other. So that's the fluctuation field: the empirical measure, from which I subtract the expectation, normalized by this factor of square root of mu epsilon, since this is what we expect to be the relevant process to have a limit when mu epsilon tends to infinity. So what I say is that I will not try to describe the correlation functions, but just moments of this quantity: I am interested in the expectation of a finite product of things like this, zeta at time t_1 of h_1, up to zeta at time t_p of h_p. You see that this is somehow weaker; we had a lecture yesterday about these different notions of convergence. If you take just one or two times, it is much weaker than the convergence of correlation functions, because you can have a lot of pathological things which happen with probability almost zero, and they will not contribute to the expectation; so you neither want nor need to describe them. But on the other hand, it is a bit better than the correlation functions, because you can look at many different times. If you would like to get convergence of the whole process, it is of course better than just looking at one time. So it is a weaker convergence, but with many different times, which is important for the convergence of the process. Question: Can I ask, if you know this convergence for all p, say at the same times, would that imply the convergence of correlation functions? Answer: No, you cannot go back to the correlation functions.
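In formulas, the quantities of interest are the p-point moments of the fluctuation field (my notation for what the lecture calls I_p):

```latex
I_p(t_1,\dots,t_p)
  = \mathbb{E}_\varepsilon\Big[\,
      \zeta^\varepsilon_{t_1}(h_1)\,
      \zeta^\varepsilon_{t_2}(h_2)\cdots
      \zeta^\varepsilon_{t_p}(h_p)\Big].
```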
And actually, I don't think that the convergence of the correlation functions for these very large times is true, because we really remove a lot of things. Question: You don't think it's true? Answer: At least, maybe I should have said in which functional space this convergence holds. In L1, I guess it would be convergent: if I put two times here, it essentially tells you that the integral of the correlation function against the test function converges. So in weak L1, or in the sense of measures, you get something like convergence of the correlation functions. But in Lanford's proof this is not at all the kind of functional space you look at: there you work in spaces like L infinity with exponential weights, which are much stronger. And here, if you take two times (of course, since we are under the invariant measure, one time is not very interesting, everything is invariant), it tells you something about the correlation function, and my impression is that it should be something like convergence in the sense of measures, which is much weaker. Question: Sorry, I think I missed one point. The z_i is the trajectory, right? Answer: Yes, a real trajectory of the dynamics. Question: And we are grand canonical, so why does the sum only go to N? Answer: Because in the grand canonical setting, N is the total number of particles. You have a superposition of initial data with different numbers of particles, given by this capital N. For any realization, the sum goes up to N; this N is still a random variable.
And then I sum over all possible N with this measure here. But for one realization, N is finite: I have just a finite number of particles. Question: So, are the correlation functions strictly stronger than this? If you know all the correlation functions, do you know this expectation? Answer: No, not with the usual correlation functions. You can introduce biased correlation functions, where each time you have something like this you introduce a bias, and then you can say something; but these are very generalized correlation functions, which go from one time, where you have a distribution, to another time. If you do this step by step, and each time you reach the end of a step you introduce a bias, then this is still a kind of correlation function, but it is not what people usually call a correlation function. Okay, so that's the quantity that I will call I_p. Now of course, I said I would like to study this quantity, but I have to do something. The idea now is not to forget about all the previous machinery, but to use it on very small time intervals, because it is still a very nice description of the dynamics: it tells you what happens on small time intervals. Not time intervals of Lanford's size, but even smaller ones. And then I will use this elementary step and do a lot of iterations. That's really the thing we have in mind. So let me explain. I start from this, and of course, if all the times here are equal, then because I compute everything under the invariant measure, it is like computing everything at time zero, and then I know very well what happens.
Okay, so if all times are equal, then I'm very happy, because this is just statistical physics at equilibrium, which has been very well understood for a very long time: you just have to understand the role of the exclusion. And what you expect is that in the Boltzmann-Grad scaling, the exclusion is negligible. Of course that is not automatic, you have a lot of work to do, but essentially what you can have in mind is that if all times are equal, you can forget about the small correlations due to the spatial exclusion, and computing this quantity is just like computing it under the Gibbs measure without exclusion. Then this is something you know very well: everything is Gaussian, everything is explicit. So the thing I would like to do is to pull back this observable so that everything is at the same time: the goal is to pull back the fluctuation field in order to have just one time. I can start with I_2, which is the covariance, and the method for higher moments will be more or less the same. So it is just a covariance, of h_1 and h_2, and the only thing that matters is t_2 minus t_1, because everything is invariant under time translation; it equals the expectation of zeta at time 0 of h_1 times zeta at time t_2 minus t_1 of h_2, and does not depend on t_1 and t_2 separately. Now what I would like to do is to pull this back until t_2 is equal to t_1. So I would like to write it as the expectation of zeta of h_1 times something a bit more complicated, which still has the structure of a fluctuation field, and which I will define right now; I will come back to this.
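Concretely, since the Gibbs measure is invariant in time, the covariance depends only on the time difference (my notation):

```latex
I_2 = \mathbb{E}_\varepsilon\big[\zeta^\varepsilon_{t_1}(h_1)\,\zeta^\varepsilon_{t_2}(h_2)\big]
    = \mathbb{E}_\varepsilon\big[\zeta^\varepsilon_{0}(h_1)\,\zeta^\varepsilon_{t_2-t_1}(h_2)\big].
```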
Now, you see that so far I did not use at all the fact that I am at equilibrium; this could be true for any distribution. I will use equilibrium only to control the remainders. That's really important: here I will use the invariant measure and time decoupling. So maybe the first thing I can do is explain what happens with this conditioning, and actually we will see that each time something pathological happens but is localized in time, then I will be fine. You see, in order to have an equality, I had to add a conditioning: I am able to pull back the observables provided that I have this conditioning, and so I have to understand whether this conditioning modifies the covariance. So I would like to look at the difference and prove that it is almost zero, because this conditioning was not in the original covariance: I added it in order to be able to use this duality. Now, what is very good with the invariant measure is that it does not depend on time; that's the definition of the invariant measure. So I can use the Hölder inequality: the error term is smaller than the expectation of zeta at t_1 of h_1 to the power 4, raised to the power one fourth, times the same for the other factor, times the expectation of one minus the indicator of the good set, raised to the power one half. What is very good here is that each factor now involves just one time, so estimating it is simple: it is just an estimate of the fluctuation field under the equilibrium measure, which is stationary.
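The Hölder step can be written schematically as follows (my notation: Omega is the good set without big microscopic clusters, X the pulled-back field):

```latex
\Big|\mathbb{E}_\varepsilon\big[\zeta^\varepsilon_{t_1}(h_1)\, X\,(1-\mathbf{1}_\Omega)\big]\Big|
 \;\le\;
 \mathbb{E}_\varepsilon\big[|\zeta^\varepsilon_{t_1}(h_1)|^4\big]^{1/4}\,
 \mathbb{E}_\varepsilon\big[|X|^4\big]^{1/4}\,
 \mathbb{P}_\varepsilon\big(\Omega^c\big)^{1/2}.
```

By stationarity, the first two factors are one-time equilibrium estimates, bounded in terms of the sup norms of the test functions; only the probability of the bad set carries the smallness.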
If h_1 is in L infinity, for instance, this is finite and depends only on the L infinity norm of h_1; probably this is not even optimal, but if you have a nice test function, that's fine. The same for the other factor, of course. So the only thing I have to do is to prove that the last factor is small. And for that you have a geometric lemma proving that microscopic clusters cannot be too big. So this converges to zero with the additional conditioning. And actually, you have to remove these small clusters at many times: you have 1 over delta times at which you have to remove these bad events. But even with this factor of 1 over delta, the probability converges to zero, provided you have chosen your gamma in a suitable way. I think gamma is of the order of 40, but if I don't give the proof it's not really useful to give the value; for gamma large enough, this converges to zero even with the 1 over delta. So what I would like to show you with this estimate is that it is really simple to control remainders: you use the Hölder inequality, then the invariant measure, and the last thing you use is that the pathological behavior is localized in time. Here I have my time interval from t_1 to t_2, divided into small intervals of size delta, and at each of these times I remove all microscopic clusters bigger than gamma plus 1. And what I say is that if gamma is large enough, then having such a cluster is really unlikely. And each bad event is localized, because it concerns one time.
Then you take a union bound, and you're done. Now I would like to remove other things, which are more complicated. There are typically two other things I would like to remove. On this very small time scale, I would also like to remove the events where kappa is different from zero, that is, where I have recollisions. So I have to prove that the contribution of the terms with nonzero kappa is also very small. That is simple for the conditioning, but I would also like to remove kappa different from zero before I iterate. So one thing I have to do is the following estimate: if I do exactly the same thing as before, when kappa is different from zero I would like to estimate the corresponding covariance, and if I use a Cauchy-Schwarz inequality, what I have to do is look at the variance of this field and prove that it is small when kappa is not zero. I will not do that here, because it is really technical, but I would like to mention a couple of things about the control of recollisions. This is where the proof is really different, in terms of functional spaces, from Lanford's proof. In Lanford's proof, what you have to prove is essentially that the recollisions have a small contribution in L1: you look at this F, and you prove that the recollision terms contribute little in L1. Here, what you would like to prove is not an L1 bound, but a bound on the variance, which is more like an L2 bound. So that's the first point. What we have to look at is something like this Phi.
When I say "bad" it is just because it has a recollision, and what I would like to prove is that the square of this quantity converges to zero. The first remark is that this is more complicated than proving the convergence of the expectation. Now, remember how I construct this Phi-bad: I have a trajectory with recollisions. There is a geometric argument which tells you that essentially you have a graph. If you have no recollision, the graph is a tree, and that gives a contribution of order one. But if you have a recollision, you have a loop in the graph, and this implies very strong geometric constraints, because the size of the particles is epsilon: for two given particles to have a recollision is a very strong constraint. So what is important is that the geometric constraint associated with the loop provides some smallness. This is really the most technical part of the paper, and not really interesting, but you have to compute everything; essentially you gain something like a power of epsilon. But because you have to estimate the L2 norm and not the L1 norm, there is one thing I didn't mention which is really important: when you write the Duhamel formula, you de-symmetrize the system very much. You say: I look at this trajectory; particle number two is the first to collide with particle number one, then particle number three has the next collision, and so on. So by using this expansion, you introduce a very strong de-symmetrization in your system. And this is not good.
In L1 you don't care, but in L2 it is very different. When you de-symmetrize, essentially what you do is introduce an ordering, and at this stage, when you look at Z_2, Z_3, up to Z_m, they are not symmetric at all. And in all these things you would like to prove, laws of large numbers, central limit theorems and so on, exchangeability is really, really important. So one thing you have to do, before all these estimates, is go back to the way I defined this Phi and symmetrize everything. That's really something important. In L1 you don't care, because it gives the same norm; but here, trust me, there is a very important symmetrization of the forward flow. The symmetrization itself is easy: since everything else is symmetric with respect to all particles, you just take your Phi, take one over factorial n times the sum over all possible permutations, and that's fine, you have something symmetric. Apparently in L1 it doesn't change anything, but when you compute the L2 norm it does. Maybe another way to explain why this symmetrization is really important is that when you symmetrize, you spread over a very different part of the phase space. Imagine that you are just in dimension two. By the Duhamel formula, you obtain something which is not symmetric, something like the indicator that x_1 is less than x_2. It is much better to replace this by the symmetrized version: this indicator plus the exchanged one, each with weight one half.
So now I have one half and one half; I just exchange x_2 and x_1. And if you compute the L2 norm of the symmetrized function, it is less than the L2 norm of the original one. Of course, in dimension two it is not really different, but if you do the same in dimension n, you gain a factorial n, and when you want to re-sum the series, that is not just a detail. So that's really important here, and I think it is really a bad feature of the Duhamel formula that it imposes this strong de-symmetrization: if you want to work in different functional spaces, you have to retrieve this symmetry. So that's what I wanted to say about time decoupling and the invariant measure; that's the way we control the remainders. Now I have to explain more precisely all the types of remainders. You really need to understand these Phi: essentially they are not so bad, they are like mu epsilon to the m divided by factorial m, times a sum of indicator functions. That's really important, because with these indicator functions it is much better to symmetrize as much as possible. Okay, so now let me explain the iteration. At the very beginning I said that, looking at the covariance, I was interested in pulling back the observable from t_2 to t_1, and for the moment what I managed to do is pull it back from t_2 to t_2 minus delta, which is not yet what I want. So I have times like this; I will just call them 0 and t, because the names are not really important. So I have pulled back the variable to t minus delta. And the combinatorics, because of the recollision index, can be very bad.
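Here is a small numerical illustration (my own hypothetical example, not from the paper) of why symmetrization is free in L1 but crucial in L2: the Duhamel ordering produces the indicator of the simplex x_1 < ... < x_n weighted by n!, whose symmetrization is the constant 1. Both have the same L1 norm, but the L2 norm drops by a factor sqrt(n!):

```python
# De-symmetrized vs symmetrized indicator on [0,1]^n, Monte Carlo check.
import itertools
import math
import random

def phi(xs):
    """De-symmetrized term: n! on the ordered simplex x_1 < ... < x_n, else 0."""
    return math.factorial(len(xs)) if all(a < b for a, b in zip(xs, xs[1:])) else 0.0

def phi_sym(xs):
    """Symmetrization: average of phi over all permutations of the arguments."""
    n = len(xs)
    return sum(phi(list(p)) for p in itertools.permutations(xs)) / math.factorial(n)

random.seed(0)
n, samples = 3, 100_000
pts = [[random.random() for _ in range(n)] for _ in range(samples)]
l1 = sum(phi(p) for p in pts) / samples                  # ~ 1, same as symmetrized
l2sq = sum(phi(p) ** 2 for p in pts) / samples           # ~ n! = 6
l2sq_sym = sum(phi_sym(p) ** 2 for p in pts) / samples   # exactly 1 a.e.
print(l1, l2sq, l2sq_sym)
```

For distinct coordinates exactly one permutation is ordered, so phi_sym is identically 1: the L2 norm squared falls from n! to 1, which is precisely the factorial-n gain the lecture mentions when re-summing the series.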
What I get is a constant to the power of the number of particles. Even though the number of particles remains bounded, I have this power at each step, and if I do the whole thing 1 over delta times, I get a constant to the power 1 over delta. Remember that delta is very small, of the order of epsilon, and of course you cannot do anything with such a bound. So the first sampling is at scale delta: at each iteration, when you pull back, you remove all the contributions where kappa is different from zero. This means that you have a remainder term to control, according to the previous strategy, but the kappa can be nonzero only during one small time interval. And then you just iterate: t minus 2 delta, and so on. At each step you have a remainder, which is a Phi with recollision, corresponding to a loop. This is small because of the geometry: it is of the order of epsilon times delta to the one half. But then you have 1 over delta such terms, so the total is epsilon divided by delta to the one half, which is still small, because delta is a bit bigger than epsilon. So I iterate, removing such a loop at each step. But then there is another problem. I can try to iterate these things, but there is another bad behavior, which I mentioned when we did the series expansion with the Duhamel formula: you may also have a growth of your collision trees which is not admissible, in the sense that it would be super-exponential.
So then you have to introduce another sampling in time, at a bigger scale, because on a time delta there is essentially just one collision, so you will not be able to measure the growth of the collision trees. Okay? Either you have one collision or you have zero — something of order unity — so it's very complicated to control the growth on such small intervals. That's too small to control the growth. Okay? So I need to introduce a time scale tau which is a bit bigger, and this sampling at scale tau is there to control the growth, to remove the super exponential trees. Okay? So maybe I should recall why. Typically, what you expect is that the probability of having, say, M collisions in a small time interval is like delta to the M. Okay? So with delta small, it's not a problem to sum this over all possible M. Now for tau: as long as tau is still small, it's also not a problem to control tau to the power M for any M. But if I don't remove the super exponential growth, then for the big time T here, the contribution of trees of size M will be like T to the M, and that is not summable. Okay? So I cannot do that; I really have to control the growth, otherwise I will not be able to reach times T which are much bigger than one. Okay? I don't have a choice here: if I do not do the first sampling, I have a divergence like a constant to the power one over delta, which is not admissible; and if I do not do this second sampling, I have a growth like T to the M, which is not summable. Okay? So if I would like to reach big times, I really have to truncate all these bad things. Okay? So maybe I can explain a bit this super exponential thing, because it's not too complicated. Okay? So I will have all these times tau.
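The role of the two scales can be summarized in a hedged sketch (my notation, following the orders of magnitude in the lecture): on a short interval, the contribution of M collisions is summable in M, but over a long time T ≥ 1 it is not, which is why a growth condition at scale tau is imposed:

```latex
\sum_{M\ge 0} \delta^{M} < \infty, \qquad
\sum_{M\ge 0} \tau^{M} < \infty \quad (\tau < 1),
\qquad\text{but}\qquad
\sum_{M\ge 0} T^{M} = \infty \quad (T \ge 1).
```

The price of a tree first violating the condition N_K ≤ 2^K on the K-th tau-interval is then of order

```latex
\sum_{n \ge 2^{K}} \tau^{\,n}\; T^{\,2+2^{1}+\dots+2^{K-1}}
\;\lesssim\; \big(C\,\tau\,T\big)^{2^{K}},
```

which is summable over K, super-geometrically, provided tau times T is small.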
Maybe I should keep this one. Okay? So I will have T minus tau, T minus 2 tau, et cetera. And what I will ask is that the total number of particles created by these branchings stays controlled: NK is the number of particles created on the K-th interval, say from T minus (K plus 1) tau to T minus K tau. Okay? Just remember that for a trajectory like this, you have a number which is the number of branchings in a tree. Okay? So I have this on the first time interval, and then I start: this will be N1, then N2, something like this. It gets, of course, bigger and bigger, but what I would like to say is that the typical number of branchings is not too large, so what I will ask is that NK is smaller than 2 to the K. Okay? Which corresponds to exponential branching. So I will not explain all the details, but — sorry, that was a question — the idea here is the following. If you have something like this, then you see that the growth for the first time interval, if you are quite far here, would be something like T. If you are a good tree up to interval K, you know that the total number of branchings up to this big time here — imagine that I'm here, so here I have a large number, and what I know is that it's good on all the first intervals — is smaller than 2 plus 2 to the 1 plus, et cetera, plus 2 to the K minus 1. Okay? So that's the total number of particles that you can create in all these intervals if you are a good tree. The cost of something like this is like T to this power, and this power is essentially 2 to the K. Okay? Now, if you are bad in this time interval, this means that the total number of branchings here is at least 2 to the K — actually you sum over all possible N bigger than 2 to the K, but essentially it's equivalent to this. Okay? So if you look at this thing, you see that it will be something like tau times T, to the power 2 to the K, and this contribution can be made as small as you want, provided tau times T is less than 1. Okay? So that's the way you control this bad growth: I stop as soon as I find an interval where NK becomes larger than 2 to the K. Okay? So that's the second sampling, and it's actually very reminiscent of the sampling in [inaudible]. Okay? So I have this double sampling. Of course it's a bit complicated, because I have a lot of remainders, but the idea is always the same: this estimate here can be made in L1 but also in L2, so I can control the variance; and each time, everything is really localized in time, so I can just discard all these pathological behaviors. Yes? — About the recollisions, I have a question: it seems you are handling the recollisions in two stages — first you say that you don't have too many, and then you say that they don't contribute. Is there a reason why you cannot do it directly, just saying they don't contribute? — Because of this conditioning: you see that this geometric argument, I am only able to control it once I have already projected on a finite dimensional space. Maybe it's possible, I don't know; actually we are currently trying to rewrite
everything without using all these Duhamel expansions, so just looking at real trajectories, and maybe it's possible to do something like this. But somehow we are really biased by the fact that the traditional way of doing all these things is to use these Duhamel expansions, and so to control these kinds of recollisions on finite dimensional spaces. Maybe it's possible to do it directly, I don't know; you know, the weight of history, of tradition — it's complicated to change our minds. Maybe it's possible to do it in one step, I don't know. — About the time sampling: you also use this trick of time decoupling and Cauchy-Schwarz; is this also possible away from equilibrium? — Everything here uses this time decoupling. You see that I don't care at all about, say, the fluctuation field at time zero, and the same if I do this with higher moments: I don't care about this fluctuation field or that one, I just say that I control their size, and then I concentrate on the pathological time where something bad happens. So it's really, really important here that we have the invariant measure. What I hope is that maybe in the nonlinear case you don't have this kind of control, but maybe something like an entropy inequality would do. Okay, that's the hope, but that's just science fiction — though it's important to have also a bit of science fiction. Sorry, this was a bit fanciful. — So are you ultimately saying that the probability of this super exponential growth is small? — Yes. — That was just a bit fast: did you say how, or did you just claim it?
No, I say that I can prove that the contribution of all this is small. What I do is that each time I reach a new interval of size tau, I just check if the thing is nice: if it is, I iterate; else I do not iterate, and I estimate the term, and this remainder term is of the order written here. Okay? So if I choose tau sufficiently small — of course not of the order of epsilon, but something like one over log, or one over log log — you see that this gives a geometric series, and so all this contribution will be negligible. Okay? But you see that I will not iterate the whole thing, so once again I have no result on the correlation functions: I destroy so many contributions that in the end I cannot say anything about the correlation functions themselves. Okay? So I still have a bit of time to try to explain the case of higher moments, and especially how we get Wick's rule, to prove that the limiting field is Gaussian. Okay? What I say is that at this stage, you just use all these arguments: the duality, plus the sampling, plus the estimate of the remainders. One thing which is a bit technical is to estimate the variance of all these bad terms, but okay, this is something that you can do; it's a bit of a computation. What is not really fun is that you also have to take into account all the small corrections due to the spatial exclusion, so there is a bit of combinatorics here. But really, if you symmetrize the thing and use these kinds of arguments for the smallness, then you can prove that all these remainders are not too big; and then you really manage to run this iteration and pull back the observable from t to zero, and you get the formula for the covariance. Okay? Because then you have all the main terms of the Duhamel formula, you can prove that the same terms give the important contribution as for the Boltzmann equation, and then you get that the
covariance will satisfy the linearized equation. Okay, so at the end of this part, what you can prove is that the covariance can be rewritten — up to all these remainders — as an expectation: you still have this guy here, which has not changed, and then you have the pullback operator at time zero applied to phi m. Okay? And of course there is a sum: the m here is constructed by this iteration, so you know that it's sub exponential, and I will add a zero here to indicate that there is no recollision. And then you have the usual thing, comparing the Boltzmann pseudo trajectories and the BBGKY pseudo trajectories; but you see that the contribution of the super exponential m in the Boltzmann expansion is also small. Okay? So that tells you that the covariance will satisfy the linearized Boltzmann equation. Maybe I should say something which is a bit like magic — it took me a lot of time to realize it. By this construction, what you obtain is something like the BBGKY hierarchy. Okay? And one thing which I think is really magic in this BBGKY hierarchy is the following: you start with an initial distribution which is just a sum, because now your initial distribution comes from this fluctuation field, so you have a sum of h's. The initial distribution is something like M tensor n — the projection on the n-th marginal — times the sum from i equals 1 to n of h of z i. Okay? So you have to understand the BBGKY hierarchy with initial data like this, and then, instead of the nonlinear Boltzmann equation, what governs this kind of initial data is the linearized Boltzmann equation: another exact solution of the Boltzmann hierarchy is the solution of this form, where h is the solution of the linearized Boltzmann equation. Okay? So that's kind of magic, I think. Okay, so at this stage
what you have proved is that the covariance of the fluctuation field is the solution, for very large times, of the linearized Boltzmann equation. And of course this is not enough to prove the convergence of the process: you need two other things. You need some tightness — I will not go into this, because it's just very technical, but okay, it is something that you have to prove; it's not very different, just much more technical. Actually, if you can prove tightness on a very small time interval, then it's okay for long times: if you can prove the convergence of the process for small times, then tightness will be the same for long times, so I will not comment on this here. The other thing that you need, to prove that the limiting field is Gaussian, is Wick's rule. So yesterday we had a whole lecture by Manfred on this Wick's rule, but here you see that it's a bit different: in Manfred's lecture, Wick's rule is what you assume on your random variables, and then you use this Wick's rule to say that some terms are zero and other terms are not zero. Here it's not exactly the same: you need to prove it. What I need to prove is that I have this pairing rule when I compute the moments of order p. What I would like to prove is that this guy is, up to some small remainders, a sum over all possible pairings of products of covariances — so you have to put two of them here; I will not write it all, say with H and G — you take all possible pairings, and you take the product over all pairs of the expectation of this guy. That's just Wick's rule. Of course, usually, at equilibrium, you just compute this directly, because all the fluctuation fields are taken at the same time, so there is no reason why you should do something
different. Okay? But here, of course, you see that it's much more complicated, so the way we will do it is also by an iteration. Okay? A pairing is obtained by an iteration — that's the first idea; essentially, there will be three important ideas here. Okay? So this means that you have this complicated structure with small intervals of size delta, intervals of size tau, and now these intervals of size Tp minus Tp minus 1: three nested iterations that you have to run together. That's a bit horrible, but okay; in principle the idea is not so complicated. Okay? To realize this pairing, there is one thing which is really important: we will pull back this fluctuation structure, as I explained for the covariance. So I will start from this Hp at time Tp, and I just pull it back until time Tp minus 1. Okay? And then I'm happy, because then I have at least two fluctuation fields at the same time. Okay? So now, if I have two fluctuation fields at the same time, what I have to do is to understand whether the product is still a fluctuation field or not. Okay? So I just want to explain one elementary step. This means that I have pulled back the fluctuation structure between, say, times Tp minus 1 and Tp, and then at time Tp minus 1, I need to understand the structure of the product. Okay? And here, you see, I go back to a remark that I made at the very beginning, when I defined this zeta m: in the definition of the fluctuation field, or the generalized fluctuation field, something which was really important is that all the indices are different. Okay? So that's the second idea here: when you have the product of two fluctuation fields, what you would like to do is to decompose it into something which is still a tensor
product, plus something which is a contracted product — and this contracted product will be exactly the expectation here, the covariance. Okay? So the second idea is to decompose a product into, say, a tensor part and a contracted part. And I will just do the exercise on the product of two fluctuation fields of size one; you can imagine that the general case works the same. Okay? So what I have is something like this: one over the square root of mu, times the sum of H of Zi minus the expectation of H; and then I multiply this by one over the square root of mu, times the sum of G of Zi minus the expectation of G. I don't specify the time, because it can be any time Tp; I just look at this product. Okay? So here you see that the prefactor coming out is one over mu, and then you have two types of terms. You have a diagonal term, which will be the contracted product: one over mu times the sum of H of Zi G of Zi — this term will be very special — plus one over mu times the sum over i1, i2, now different, of H of Z i1 times G of Z i2. Okay? And then you have the minus-expectation terms: when you expand, you get minus the expectation of G times the empirical measure of H, minus the expectation of H times the empirical measure of G, plus the product of the two expectations. Okay? So what I say is that what is really important here is this diagonal term, and this term is exactly the covariance. Of course, if you compute everything, you will define the contracted product as this part of the product and the tensor product as that part of the product; but then you see that this one is very, very good: you
are very happy with this one, because it will be exactly of the right form: by definition, a zeta 2 — a fluctuation field for a function of two variables — is exactly the difference between this guy here and the expectation of the product. Okay? So this term will be the tensor product; the product of the expectations I don't really care about. And what I say is that I really have this very nice structure: of course the observable is a bit more complicated, but I still have this tensor product structure. Okay? So when you do the iteration, each time you pull back this guy, you have two choices. Either you have the contracted product, and then you are really happy, because if you look at the scaling of this guy, the mu here kills just one power of mu there, so it's something of order 1, and only the expectation will be important — you just have fluctuations around this — so the main term, the term of order 1 when you multiply by mu here, is just the expectation of this guy, and you get exactly the covariance of Hp and Hp minus 1; and then you just have to compute the rest, which decouples. Okay? Or you have the tensor product, and then essentially you have the same structure, and you can just continue the iteration. Okay? So that's really important: you can check — it's an exercise that you can actually do, say, forgetting about the exclusion here — that this is a way to compute effectively Wick's rule for a product of fluctuation fields. But what is really important here is this decomposition into tensor and contracted products. Okay? So, I don't have much time left, and there is still one thing that I would like to explain, which is the third argument, because here I
was a bit cheating. I would like to explain why, and this is really important — it's actually connected to all this cluster expansion business — so I will try to explain this last argument in the remaining time. Okay? So if I go back to the iteration: I start from a time, say Tp here, I pull back the observable, and I decompose into a tensor product and a contracted product. In the case of the contracted product, I just stop: I take out the expectation of this pair, of this contracted product, and I look at the rest of the fluctuation structure without these two fields Hp and Hp minus 1. Okay? So that's fine. In the other case, you see that I have a zeta 2 of a tensor product — something like zeta 2 of Hp tensor Hp minus 1 — and I would like to pull back something like this and say that it's not very different from the tensor product of the two pulled-back fluctuation fields. Okay? So what I would like is to keep this tensor product structure: when I pull back this new thing, essentially I keep the factorization. Okay? So the last argument here is that, at leading order, the pullback keeps the factorized structure. Okay? And that's something you expect: if you start with one particle here and grow a tree like this, and you start with another particle here and grow another tree, you expect that, more or less, it's the same to propagate back these two particles together or to propagate back this one and then that one separately. Okay? So you would like to say that it's the same to pull back Hp on the one hand and Hp minus 1 on the other hand, or to pull back the tensor product. And actually, if you assume that you have this, then
this iteration is fine: each time you pull back, you separate the tensor product and the contracted product, then you go back, pull back everything, and in the end you find Wick's rule. Okay? So if the pullback really kept this tensorized structure, this would be the end. Now, unfortunately, it is not true that the tensorized structure is preserved. This is exactly the question of the propagation of chaos in the Boltzmann-Grad limit: you see, it's the same as saying that, at leading order, the Boltzmann-Grad scaling propagates chaos. Okay? So chaos is really this independence between the two particles: now I have Z1 and Z2, they are independent, I just propagate them back, and then everything stays independent. Okay? So if this were true, then I would get Wick's rule without any additional work. Okay? Now, it's not true. Of course, when you are just interested in the covariance, you just propagate back one particle, so you don't care about all this. But if you go to the moments of order p, then you need to understand this, and actually the leading order is not enough: if you remember, just because of the scaling of this fluctuation field, you have a big power of mu in front of all this. Okay? So you cannot say 'the remainder is small, I don't care': the remainder is small, but you still have to care, because you have a very big power of mu, and so it's not clear that this small correction, this small fluctuation around the chaotic behavior, will still be a remainder. Okay? So this means that you have to understand the correlation between two trajectories like this. Okay? So that's what I would like to do in the remaining time, which is 5 minutes — oh, 10 minutes, perfect. Okay. So, what about correlations? What I would like to know about correlations is typically their
size and how they will change this tensorized structure, and whether they really contribute in the limit — of course, I hope that it's not the case, because I hope that Wick's rule is true in the limit. So I would like to understand their size, their structure, and how they propagate. Okay? Actually, we have studied these correlations in quite some detail in a previous paper, just for short times, and here the method of analysis will be the same. So we can say very precise things about these correlations and these cumulants, because for short times we can even get a large deviation result: this means that, not only at the level of fluctuations, but really for correlations which are as small as you want, we can say something. Okay? So now let me explain where these correlations come from. I will start with just two particles, and I will change a bit the orientation of the picture, just because it will be easier: time was going like this, and now time will go like this, and I just redo exactly the same picture. So the first reason why these two particles might be correlated is that, when you construct the pseudo trajectories backwards, maybe at some point these two trajectories will cross each other: you will have a recollision. Okay? So that's the first reason you can have a correlation: you have something like this, you have two particles and a third particle, and then you have something like this. Of course, the particles which are created in this tree — that creation is independent of this tree, and the creation in this tree is independent of that tree — but maybe, because of the dynamics, at some point they will touch each other, and then they will be scattered. Okay? So that's the first reason you can have a correlation, and then of course it's no longer true that the dynamics here is the same as taking these two dynamics separately and just putting them next to each other. — For this correlation, do we only care
whether z1 and z2 themselves collide? — No, no: any particle in the tree of z1 and any particle in the tree of z2. — But why does it impair the independence of the two particles if they are children? — Because, if you remember, the phi that I construct from this: I pull back along the pseudo trajectory, and I would like to say that it's the same to pull back the two trajectories separately, so the two observables separately. But you see that the configuration here is not the same as the configurations of the two taken separately — in the end, everything ends up in the initial data, and the arguments will be the children of everything — so it's not true that I can treat these two things as separate, because the configuration will be different. Okay, so that's the first obstacle. And you see that when I said I would discard recollisions, I meant internal recollisions: if I have two particles here, within one tree, this recollision is not really important, I can discard it — that's typically what I do when I restrict to kappa equal to zero. But these recollisions here, between the two trees, I cannot discard, because they really change the structure of the whole thing. So when I look at moments of higher order, I cannot discard all recollisions: these external recollisions — I think this is the terminology introduced by Céajot and Maillot — I really have to keep in mind, because they introduce some correlation between these two trees. Okay? So because of this, it's not true that the pullback of phi is just the product of the two pullbacks. Okay? Now you can say: okay, but this still happens with small probability — imagine that you have no creation; then you have just something like this, and you see that you
have to be in a very small subspace. So what I can say is that, in order for such a recollision to happen, there is a very strong constraint on these two particles: this is localized in a very small set, whose size is typically one over mu epsilon — and here all the support conditions on the phi come in. Okay, so this recollision is the first obstacle. Now you can say: okay, let's consider two trajectories which have no recollision; is it true that they are independent? And no, of course: if they do not have a recollision, that already tells you that they are not independent, because you know that they have no recollision, so they cannot overlap. Okay? So you have a second obstacle, which tells you that having no recollision is not the same as being independent. If I would like to say that they are independent, what I have to write is: having no overlap is like being independent, minus having an overlap. So what I would like to write is: no recollision is like being independent — because then, of course, they evolve just independently; I know that they have no recollision, so I can write the dynamics here and the dynamics there — but still constrained by the fact that at no time should they overlap, so I can write something like this. Okay? So now we have another contribution to the correlation at order two, and it's actually really important, because the covariance is exactly the sum of these two terms. Okay? So the second problem is this overlap: you run your two trajectories independently of each other, but then you move the whole thing rigidly, and then you have an overlap. Okay?
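What was just described can be condensed in a hedged schematic (my notation, signs indicative only): writing Φ₂ for the pullback of the pair and Φ₁ for the one-particle pullbacks,

```latex
\Phi_2(z_1,z_2) \;=\; \Phi_1(z_1)\,\Phi_1(z_2)
\;+\; R_{\mathrm{rec}}(z_1,z_2) \;-\; R_{\mathrm{ov}}(z_1,z_2),
\qquad
\|R_{\bullet}\|_{L^\infty} = O(1), \quad
\big|\operatorname{supp} R_{\bullet}\big| = O\big(\mu_\varepsilon^{-1}\big),
```

so each correction — the recollision term and the overlap term — has L1 size of order one over mu epsilon, and the covariance is exactly the sum of these two contributions.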
So this is really different: with the recollision you have some scattering, while with the overlap — maybe you have something like this — you have no scattering. You just have two independent trajectories, which you move rigidly until the point where they overlap; there is no scattering, the dynamics is not modified. Okay? So that's the situation of overlap. So that's the case of two trajectories, and it completely describes the defect of factorization: when you write your pullback, it's the sum of the pullback of independent variables, where everything is factorized, plus small remainders, and these small remainders are of order one in L infinity, but supported on a set which has size one over mu epsilon. Okay? So the L1 size of both of these guys is the same: it's just a geometric condition — the size of the set is just a matter of geometry — and you don't care whether you are deflected or not. Okay? So that's the case of two trajectories, and then you have the case of many trajectories. Okay? Maybe I should also say that one recollision or one overlap is the leading order contribution here: of course you can have a second recollision, but this one will not be important, and actually it will not improve the estimate. Okay? So once you have at least one recollision, you really don't care whether you have more than that. The others can be dealt with exactly as in the case where kappa is different from zero. So here you see that you just draw a graph from this: if you have two things which are independent, then you have a graph which is not connected. Okay? And now, with this recollision or with the overlap, you have a graph which is connected. Okay? And what I say is that, among all graphs which are
connected, only the minimally connected graphs are important. Okay? So, for the correlations, what you obtain is that they are represented by graphs which are connected, either by a recollision or by an overlap; and what I say is that, among all these graphs, only the minimally connected graphs really provide a leading order contribution. Okay? It's not so clear to know exactly the contribution of, say, having one more recollision or two more recollisions — it's not clear that you gain something each time — but what is clear is that you gain at least a little bit of something, so that the contribution is not really important and you can just discard them. Okay? So you see that what is really important is the graphical representation of this, and you can even simplify a bit by replacing the whole tree here by just one point. Okay? The simplified representation in terms of graphs is to replace the whole tree of particle one by just one point, and then you need to have a connection here, which is either an overlap or a recollision. Okay? So now, you see, I want to compute a product of a lot of fluctuation fields, so I will not have just one or two starting points, but P starting points. Okay? So then I have to do the same: I have P starting points, and each point will represent the whole collision tree of particle one, two, up to P. Okay? And what I say is that the cumulant of order P — I can classify all the cumulants; the cumulant of order P is actually the coefficient, when you expand the partition function, the exponential moments, the generating series, at order P — what you obtain is that you should have dynamical correlations between all these points. Okay? So now I can just forget
about the dynamics here and what I say is that I should have a connected graph so maybe something like this this okay and only minimally connected graph will be important and so you see that the size of the cumulant will be like one over mu epsilon to the P in L50 norm in L1 norm but of course then you have the combinatorics of all possible graphs which is not so good and so this will give you the factor of P and if you forget to symmetrize the whole thing then you will lose another factor of P okay and then it's not good okay so I will not say how you can say plug all this thing in the previous part because actually that's what you have to do on each small iteration so on each time delta you have that this complicated thing so you have really to look at the iteration of this so you see that because of this correlation you will have packets of particles which are growing and then you have to control all these clustering structures so this is one additional technical thing that you have to plug in the proof but say really at first say to have an intuition of the proof you should just imagine that the pullback keeps the center raster structure okay so that's another thing so what you have to do is see if you just go back to this big picture so maybe I will conclude with this so you have three different scales so you have the scale delta so a scale delta what you remove is just an internal recollection so or non-clustering so the one which create a loop so loop you just remove loops so that's can be loops like this or loops like then at time tau what you have to remove is both the superexponential and also all complicated clustering okay so at time tau you have to remove all big packets of particles and you just keep the standardized things so all clustering have to be removed at this time tau and then you have the the tp here minus one or tp where at this time what you are doing is just to separate the contracted products and the taux product and then throw away all 
the small remanders okay so here you have to just do this this okay contracted and as a product okay so if you just add all this argument just in the right order which is a bit a bit intricate okay then you end up with the fact that you have the VIX rule and that the covariance satisfies the linearized Boltzmann equation so that in the end the limiting fluctuation field is the solution of this fluctuating Boltzmann equation and this for very long times so very long being something like log log website or something like this okay thank you very much for your attention earlier once you were using this you have developed this time uniform estimates on the truncated correlation functions are they used still in this proof here or you completely managed to do without so you mean the first paper in L2 right yeah no we completely so it was very specific actually I think this method here is very robust but the one that we developed actually in 2D was not somehow it's I think it's a bit related to this but it was say somehow the canonical version of this and so in the canonical setting you see a lot of other fluctuation which are due just to the fact that you are not so you have all this a correlation coming from the fixed number of particles and so it was really important to control the partition function so to have a bound on the partition function which is not true in any higher dimension and so yeah we completely that's really a totally different method and I think actually that these papers can just go to the trash so I would on the technical side there is this simulation here which allows you to gain a factor going back to the very beginning of the lecture which means the u squared and the x squared I expected at some point to see which would deal with the plus and minus terms the gain and the loss and I didn't see actually we are not able to use the only way we are able to use the fact that there are plus and minus is the existence of an invert measure so I agree that 
it's really a pity that we cannot use better than this science but say at the moment we are really not able to see any minus sign so the only thing is that of course if you add only a plus sign you will not have any invert measure that's the only way we can today maybe maybe at some point we will understand how to use these constellations but at the moment we are not able to use them there is no way to symmetize the omega and the the problem is that actually everything is so you can symmetize but then you see that actually that's exactly what you do to get this cross section with the positive part actually you just change one of the omega in minus omega but then you see that you are not exactly at the same point and anyway everything is deterministic so you see that you have no say it's not clear how you can get the averaging so we just add an atom to use such a constellation on very very some more times of the order of delta so and that's another a little bit different approach so here we use all this dual-maliteration and sort of trajectories but at some point actually that's something that Thierry is trying to write right now is to use real trajectories and then on very small intervals it's possible to to use a bit the constellations but yeah for the moment it's really something that we don't understand so in order to stay reasonable on schedule I think the further questions can be answered in smaller groups during the coffee break which should last until maybe 11.35 or 11.40 to stay reasonable on time and we can sign the speaker again
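The limiting objects named at the end of the lecture can be summarized schematically. This is an editor's reconstruction in ad hoc notation, not the lecture's precise statement: the limiting fluctuation field is Gaussian (hence the Wick rule for its moments) and evolves under a stochastic linearized dynamics.

```latex
% Editor's schematic notation: \zeta_t is the limiting fluctuation field.
% Wick rule: \zeta_t is Gaussian, so all its moments are determined by the
% covariance, and the covariance satisfies the linearized Boltzmann equation.
% Schematically, \zeta_t solves the fluctuating Boltzmann equation
d\zeta_t \;=\; \mathcal{L}_t\,\zeta_t\,dt \;+\; d\eta_t,
% where \mathcal{L}_t denotes the Boltzmann operator linearized around the
% solution of the Boltzmann equation, and \eta_t is a Gaussian noise whose
% covariance is dictated by the collision process.
```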
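The combinatorial point in the lecture, that the family of minimally connected graphs is factorially large while symmetrization recovers a factorial, can be made concrete with a quick computation. This is an editor's sketch, not part of the lecture: by Cayley's formula there are p^(p-2) labeled trees (that is, minimally connected graphs) on p vertices, which grows super-exponentially, whereas after dividing by the symmetrization factor p! only exponential growth remains.

```python
import math

def minimally_connected_count(p: int) -> int:
    # Cayley's formula: the number of labeled trees on p vertices,
    # i.e. of minimally connected graphs, is p^(p-2).
    return p ** (p - 2)

for p in [3, 4, 8, 12]:
    raw = minimally_connected_count(p)        # grows like p^p (super-exponential)
    symmetrized = raw / math.factorial(p)     # grows only like e^p once divided by p!
    print(f"p={p:2d}  trees={raw:12d}  trees/p!={symmetrized:12.3f}  e^p={math.e**p:12.1f}")
```

For p = 12 the raw count already exceeds 10^10, while the symmetrized ratio stays below e^p. This is why, in the generating series of cumulants, the 1/P! coming from symmetrization turns the graph combinatorics from super-exponential into merely exponential, which a small elementary time step can then beat.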