So, thanks a lot, and thank you very much for the invitation to this meeting. As announced, I would like to talk about a derivation of the Vlasov equation, which effectively describes the dynamics of many particles in the weak-coupling limit. To get to that goal I would like to focus today, as you can read in the abstract, on different notions of convergence. The idea is to present the ideas behind the proofs: how these different notions of convergence can be used to derive the Vlasov equation, which statements hold true and which do not. In part it is also technical: for technical reasons it is good to look at the right notion of convergence, although other notions might hold as well, it might be technically hard to prove them. So I would like to start with the different notions of convergence I will discuss today, then introduce the microscopic system and give you the respective statements on how one can prove that there is some effective evolution equation describing this system. For the notions of convergence, assume you have some sample space Omega; I would like to talk about four different notions. You need not have a particular system in mind, these are just general definitions, repetitions of what you of course had in your probability class, but I think it is good to have them at hand. The first is something I would like to call deterministic convergence (one has to get used to using chalk again): for each omega in the sample space it holds that the limit as n tends to infinity of x_n(omega) is some x(omega). Here you do not need to talk about probability measures and all that; you have some set, you have objects in that set, and there you have this kind of convergence.
The next notion of convergence you all know from your probability class is almost sure convergence. Here the x_n are a sequence of random variables and the limit is a random variable as well; I wanted to use the letter y for the limit, sorry for that. More details will come for the special setting I want to discuss. Almost sure convergence is the statement that P(lim_{n -> infinity} x_n = y) = 1. The third notion I would like to discuss is convergence in probability: P(d(x_n, y) > epsilon) tends to zero as n tends to infinity, for every epsilon > 0; this is now a statement about a sequence of real numbers. I will need a little bit of space on the board here, because I would like to say more about convergence in probability later. The fourth notion is convergence in distribution. You know there is a statement equivalent to the traditional definition, which says that the distribution functions converge at all points where the distribution function of y is continuous, namely weak convergence: the expectation E[f(x_n)] converges to E[f(y)] for all f in some set N of nice functions, for example continuous and Lipschitz functions. You can also change that set; I will come back to that later, just to make the notions clear.
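Written on one board, the four notions just listed read as follows (a condensed summary in my notation; the class \mathcal{N} of test functions in the last line is one common choice, for example bounded Lipschitz functions):

```latex
\begin{align*}
&\text{(1) deterministic:}  && \lim_{n\to\infty} x_n(\omega) = x(\omega) \quad \text{for every } \omega \in \Omega,\\
&\text{(2) almost surely:}  && \mathbb{P}\bigl(\lim_{n\to\infty} x_n = y\bigr) = 1,\\
&\text{(3) in probability:} && \lim_{n\to\infty} \mathbb{P}\bigl(d(x_n, y) > \varepsilon\bigr) = 0 \quad \text{for every } \varepsilon > 0,\\
&\text{(4) in distribution:}&& \lim_{n\to\infty} \mathbb{E}\bigl[f(x_n)\bigr] = \mathbb{E}\bigl[f(y)\bigr] \quad \text{for all } f \in \mathcal{N}.
\end{align*}
```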
In this talk I will not strictly distinguish between convergence of a sequence x_n to a fixed point y and a slightly dirtier use of the word where I put a second sequence y_n in its place. From time to time, in particular when I talk about convergence in probability, I will have two different sequences which approach each other; how to generalize the statements above to that case is, I think, crystal clear. So if I sometimes say convergence but mean two sequences approaching each other, it is clear what I mean, and the same goes for all these types of convergence. I will come back to this and make things more precise; here I am just recalling these different notions, which will play an important role in this talk, ordered, as you noticed, from the strongest to the weakest. Now I come to the system I want to discuss: N bodies evolving under Newtonian dynamics, for which I want to derive the Vlasov equation.
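Since the talk leans on notions (2) and (3), here is a small numerical illustration of convergence in probability (my own toy example, not from the talk): x_n is the sample mean of n i.i.d. Exp(1) variables, y = 1 is its law-of-large-numbers limit, and the deviation probability P(|x_n - y| > epsilon) is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation_probability(n, eps, trials=2000):
    """Estimate P(|x_n - y| > eps) for x_n = mean of n i.i.d. Exp(1) variables.

    Here y = 1 is the law-of-large-numbers limit of the sample mean."""
    samples = rng.exponential(scale=1.0, size=(trials, n))
    x_n = samples.mean(axis=1)
    return np.mean(np.abs(x_n - 1.0) > eps)

p_small = deviation_probability(10, eps=0.2)
p_large = deviation_probability(10_000, eps=0.2)
# convergence in probability: the deviation probability tends to 0 as n grows
print(p_small, p_large)
```

For n = 10 the sample mean still fluctuates on the scale 1/sqrt(10), so deviations beyond 0.2 are common; for n = 10,000 they have essentially probability zero.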
So I want to talk about the derivation of the Vlasov equation from the microscopic picture. My microscopic system (can you read what I write here, can you see it on the camera? I will try to shift a bit) consists of N particles subject to Newtonian dynamics, and I will use the following notation. The letter I use for the trajectory is X_t; this depends on N, the particle number, but I will not always write all the indices. Often I will drop the index N and recall it only when it is important. I use the letter q for the position and the letter p for the momentum of a particle. All masses in my system are equal and I set them to one, so velocities and momenta are the same thing: p is the velocity or momentum, q is the position. I will use capital letters for the configuration of the N particles and small letters q_t^1, q_t^2, and so on, and p_t^1 and so on, for the single entries; small letters describe one particle, capital letters the collection. Each of these lives in R^3, so my dimension is 3. This can easily be generalized: all statements carry over to other dimensions, and where the dimension gets interesting I will say a few words, but just think of d = 3. So X_t is a trajectory on the N-body phase space, and that is what I want to look at. It is subject to Newtonian dynamics, which means that the change of the position q_t^i is given by the velocity of the i-th particle, and for the force I assume a pair interaction in the weak-coupling scaling: the derivative of the velocity is given by 1 divided by N, this is the
number of particles in the system, times the sum over j not equal to i of the force f(q_t^i - q_t^j), always evaluated at time t. Now, from the physical point of view you might say it is questionable to assume such a system, because increasing the particle number should not change the coupling constant of your force. But these systems can be argued for by changing to different scales. My scaling will be such that the volume of the gas is of order one, so all length scales, in particular the typical distance between two arbitrary particles, are of order one. The point is: this weak coupling looks a little artificial, but if you start from a system in a large volume, unscaled, with a fixed coupling constant, you can rescale it to a volume of order one and you arrive at exactly these equations of motion. In other words, take N large; the positions might be the positions of stars forming a galaxy, and you can assume that the true galaxy grows in size as you put more stars into it. If you then rescale everything so that it has a fixed radius, you arrive at a rescaled system where the coupling constant becomes N-dependent, and that it comes out as 1/N is not a surprise. So there are good explanations that it is in fact very physical to put this weak coupling here for such a system. In my talk I will not distinguish between the repulsive and the attractive case: you can think of stars forming a galaxy, where the interaction is attractive, or of a plasma, where the electrons repel each other; this makes no difference for today's talk. Now I would like to write down the effective description, which is the Vlasov equation. So that was the microscopic description of my system, and now, if you look at
the system and make an educated guess at how these particles evolve, you arrive at the following idea, the typical idea of the mean-field situation. I will now assume some probability: my Omega will later be phase space, and the initial positions of my particles will be random. I assume that at time t = 0 the particles are i.i.d. with respect to some rho_0. So some initial density rho_0 is given, and I put my particles into the system i.i.d. with respect to this rho_0, a density on phase space: it goes from R^6 to the non-negative reals and its integral is equal to one, so it is a probability density. If you place your particles in this way, the law of large numbers tells you that the empirical density of the particles, that is, where the particles are sitting, looked at in some weak, rough manner, draws the picture of rho_0 with very large probability; deviations from that happen with small probability. Now look at the interaction. What interaction does this particle here encounter? It gets the sum of all the interaction terms from the other particles. Say this is particle number one, at q_t^1: it interacts with all the others, and given the position of particle one, the others are still i.i.d., independent of the first particle. So you can average this interaction, you can replace its true value by the expectation value; that is what the law of large numbers tells you. What you see is that the force on particle number one at time 0 is, if we assume that the system is i.i.d., in good approximation given by the convolution, in the q-coordinate, of the force with the
density. With this convolution I mean the following: (f * rho)(q_0^1) is the integral of f(y) rho(q_0^1 - y, p) d^3p d^3y, where y is just an integration variable. So you integrate out the variable describing the velocity, you just look at the density projected to position space, and then you take the ordinary convolution. That is not surprising, because I assume that the interaction is independent of the velocities; the velocity is irrelevant for the force, so I just look at the density profile in ordinary space and take the convolution. Now this holds at t = 0, and the tricky part is of course to show that this approximation survives propagation, that it is still valid after some time. This is something one has to estimate, and as very often in such situations, the initial error estimates are simple while the tricky part is the propagation of errors. Assuming that the system is initially i.i.d., proving the law of large numbers and controlling this difference at time zero is, I would say, textbook probability theory. What is harder is the propagation of errors: once you already have an error term and the particles start to get correlated, as in fact they do, controlling this is typically the trickier part of these proofs. But there is a hint that this could work, which is the following. You have a pair interaction, and I have chosen the factor 1/N such that the interaction term is of order one: the volume is of order one, so in phase space the typical velocity differences are of order one, the position differences are of order one, everything is of order one. This force term is then of order one; let us assume there is no singularity for the moment,
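The law-of-large-numbers replacement just described can be checked numerically. In this sketch (a one-dimensional stand-in with a hypothetical bounded force f(x) = tanh(x), not the talk's Coulomb setting), the empirical force (1/N) sum_j f(q - q_j) on a tagged particle concentrates around the mean-field value E[f(q - q')] as N grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # hypothetical bounded Lipschitz pair force (1d stand-in for the talk's f)
    return np.tanh(x)

q = 0.7                          # position of the tagged particle

def empirical_force(N):
    qj = rng.normal(size=N)      # the other particles, i.i.d. samples from rho_0
    return f(q - qj).mean()      # (1/N) * sum_j f(q - q_j)

# Monte Carlo value of the mean-field force (f * rho_0)(q) = E[f(q - q')]
mean_field = f(q - rng.normal(size=2_000_000)).mean()

err_small = abs(empirical_force(100) - mean_field)
err_large = abs(empirical_force(100_000) - mean_field)
# the fluctuation around the mean-field value shrinks like 1/sqrt(N)
print(err_small, err_large)
```

Here rho_0 is taken as a standard Gaussian for illustration; the point is only the 1/sqrt(N) concentration, not the particular density or force.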
and so this force term is of order one: the leading-order effect on the evolution of the system comes from this force, and it is a pair-interaction force, so one might think that correlations are in some sense of leading order. But the law of large numbers tells us that this pair interaction can, in very good approximation, be replaced by a one-particle term only; it acts like an external field. So the leading order of the interaction does not lead to correlations: when you apply the law of large numbers you integrate out one variable, and what remains is like an external field. One therefore expects that after some time the system should still be approximately i.i.d., the correlations should be small, and the question is whether they are sufficiently small that you can control the propagation of errors, so that these small correlations do not blow up by infecting other particles. That is the situation, and I would now like to talk about known results and connect them to the blackboard on the side with the different notions of convergence. I would like to start with some old results. These go back to Neunzert and Wick and to Braun and Hepp, from the 70s; there are a few more people to mention, in particular a very nice paper by Dobrushin, which I like a lot, and a contribution by Spohn from the 80s, and so on, but those are the earliest results. These results make the following assumption: f is globally Lipschitz. With this global Lipschitz assumption they get a deterministic result, in the following sense: as long as the empirical density of my trajectory at time zero (I will give some details in a moment) is given by this rho_0, it follows that my empirical density at time t is roughly given by the solution of the Vlasov equation. Now
what is the Vlasov equation? (Question from the audience: is this the full distribution function in position and velocity, or only the spatial density? You mean the empirical rho; let me define it.) The empirical density of the trajectory at time t, as a function of q and p, is defined as the normalized sum of deltas: rho_t^X(q, p) = (1/N) sum_{i=1}^{N} delta((q, p) - (q_t^i, p_t^i)), normalized so that it has norm one. This is what I would like to call the empirical density. And rho_t is a solution of the Vlasov equation, which looks as follows. In the system we have, there is the initial density profile rho_0, my particles are i.i.d., and each particle is, in good approximation, subject to the force f * rho; I assume this holds also for later times. If you now follow the trajectory of one of these particles, to describe how the density profile looks at a later time, you have Liouville's theorem on the one-body level; that is because of the idea that my pair interaction is now effectively like an external force. So in good approximation you can use Liouville's theorem just looking at the one-body level, which for an interacting system is in general something that does not work, but here in good approximation it should be true, and we simply assume it. The Vlasov equation for me now is just an ansatz; I will later prove that my true system really is described by Vlasov, that comes later in the talk. Then you can follow the trajectories, and due to Liouville the density profile stays constant along them: the total time derivative of this rho, if you follow
one of these trajectories (q_t, p_t), one of these characteristics, this is the ansatz, is equal to zero. That means that the partial time derivative of this rho, which is now time dependent, picks up the derivative with respect to q times q-dot, where q-dot is of course given by p, plus the derivative with respect to p times p-dot, where p-dot is replaced by (f * rho_t)(q). In formulas: partial_t rho_t + p . grad_q rho_t + (f * rho_t)(q) . grad_p rho_t = 0. This way you arrive at the Vlasov equation, and you see it is a nonlinear equation: rho shows up quadratically in the second summand. In this talk I will not say too much about the solution theory of this equation. Global existence and uniqueness, even for f being Coulomb, under fair conditions on the initial state, is well known (yes, I think you are right, thanks a lot), so I will just assume that solutions exist, and I will assume in addition that the spatial density obtained by integrating rho_t in the p-coordinate stays bounded for all times. As I said, if your initial rho_0 has compact support in the velocity coordinate, one can even show for Coulomb that this can be propagated in time and you get solutions like that; but I will not talk about existence and uniqueness of solutions, I will just assume them. Now this is a nice result, and the question is how to generalize it. The problem with this result is that it assumes a global Lipschitz condition on the force term, but the forces we are interested in are typically Coulomb forces. (Question: what do you mean by roughly equal here, you mean some limit?) Good that you are asking; I was just coming to that. The question is in which sense these two objects are close to each other. It is in some limit, and there are of
course different notions of limit you can use here. What you can do is take, for example, the Wasserstein distance between this empirical density, which of course defines a Radon measure, and the solution; so you take some notion of distance between two measures and show that after some time t you still have closeness in that distance. I did not want to go too much into the details of these notions of distance, because I would like to focus on a few things which are a little simpler, and I am not sure everyone in the audience is familiar with Wasserstein distances; that might require a little longer. So in that sense: I assume that the limit at time zero, in the Wasserstein distance, is rho_0, and I then show that at time t it is rho_t. Good point. So these are the deterministic results, and the drawback is that I am not so much interested in globally Lipschitz forces. The most interesting cases are Coulomb, both attractive and repulsive, if you think about gravitation or electric repulsion, and the big goal is to take f Coulomb; this is until today an open problem. Deriving these mean-field limits for classical systems is hard, and a little more tricky than for quantum systems, in the following sense: in the quantum system you have the Laplacian, which smoothens out the system a little, and that makes it a little easier. That is not saying that mean-field limits for quantum systems are per se easier; there you can ask more difficult questions. But for the same system, for example Coulomb interaction in the weak-coupling limit, I could give a rigorous one-hour talk with the full proof showing that this works in the quantum case, so that is relatively easy to do, while here it is an open problem: nobody has yet solved it for Coulomb systems, with some exceptions. Of course you could, for
example, assume that the interaction is purely repulsive and that you have monokinetic data. Monokinetic means that the velocity of a particle is a function of its position, and that of course reduces collisions of particles: if particles collide, they are at the same position with different velocities, and things like that. There are results in this direction; there is a paper which I think is very nice by Serfaty, but also work by others, treating Coulomb interaction with some additional restrictions. I would still like to stay with the situation where the volume of the gas is fixed and the initial state is general, and I would like to extend these results to Coulomb systems; as I said, I am not able to do that, nobody is at the moment. So what you can do is take a few steps which bring us closer to the Coulomb system. You can either introduce singularities which are a little weaker than Coulomb, where the force is not like 1/|q|^2 but like 1/|q|^alpha for some power alpha between, say, 1 and 2, or you can introduce a cutoff. One can do both, but for this talk I would like to stay with a cutoff at a certain length scale. So the system I want to look at uses f_N, defined in the following way: f_N(q) is just the old f(q) if |q| is larger than or equal to N^(-delta). The role of delta will be very important in this talk; you can make different choices, and I want to get delta as large as possible, to have a cutoff which is as small as possible. And I assume that f_N is kind of smooth if |q| is smaller than that, which means that I have some nice behavior below the cutoff. Smooth here means I need some differentiability of the force; since the force is a vector, that means that at zero the
force will be zero, and I go down from the value at the boundary to zero in a smooth way; I think the details are not so important. What you could also do is convolve the force term with a Gaussian whose standard deviation is N^(-delta); the results would be the same. So this is the force I want to look at, and now the first question is the following. I could try to prove the theorem at the deterministic level, staying at part one: I assume that there is convergence in the Wasserstein distance at t = 0, I assume such a force term with some delta, and I try to prove that there is convergence at a later time. Well, this will fail, guaranteed, because there is a simple counterexample which shows that this statement no longer works once you have this kind of mildly singular behavior. It is of course singular behavior: as q goes to zero while N goes to infinity, I still see the singularity; a milder singularity than true Coulomb, but still singular in the N to infinity limit. The example is very simple. Assume this here is your cloud of particles, and assume you form clusters at t = 0; here I draw the picture of the initial configuration X_0, and all these crosses are particles. I assume that the number of clusters is 1/epsilon, so the number of particles per cluster is epsilon times N, where epsilon is a parameter, and I look at the following limit: first N to infinity, then epsilon to zero. The question is whether I can prove, for this kind of sequence of initial conditions, some kind
of closeness also as time propagates. To make the picture clearer, it is good to assume that the system is repulsive. I have Coulomb with cutoff, and I look at the potential energy coming only from the partners in the cluster, because that is the leading order. How does it look? The potential is like 1/|q| with the N^(-delta) cutoff, so the potential energy from one partner is of order N^delta, and I must not forget the 1/N coupling in front (sorry, I used small n before and do not know why I changed to capital N here). So the potential energy coming from one other particle is N^delta / N: the 1/N from the coupling, the N^delta from Coulomb with the N^(-delta) cutoff. In the cluster there are epsilon times N particles I interact with, so in total the potential energy is of order epsilon N^delta. If you take the limit N to infinity first and then epsilon to zero, this goes to infinity as N tends to infinity, for any epsilon. That means the potential energy of a particle, say this one here, coming from its friends in its cluster, goes to infinity, and you know what happens: after a very short time this potential energy is transferred to kinetic energy, and all these clusters explode. As N goes to infinity the explosion gets faster and faster, the outgoing velocities go to infinity, so you cannot have convergence to an a priori given nice solution of some nice PDE; these configurations cannot approximate the Vlasov equation. So you see that there are initial conditions with the following property. Why do I take the limits in this order? Taking epsilon to zero means that the number of clusters still goes to infinity, so you still have an infinite number of different clusters, which enables you to draw the
picture of the initial density: in some weak sense you can still have convergence to rho_0 if you put your 1/epsilon clusters in the right positions. It is closeness in terms of epsilon only, of course, not in terms of N, but in terms of epsilon you have closeness of the empirical density at time zero to some rho_0. So the hypothesis is fulfilled, but in the N to infinity limit you have these explosions, so the conclusion is not fulfilled: the statement is wrong. Here you see that everything goes wrong, and the question is what we can do. (Question, yes please: those clusters in the initial condition, I am not sure I got this right, but you wanted to start with i.i.d. particles; how does that relate to the clusters?) Very good question, and that is precisely what I wanted to say next. At the moment we are still at step number one, so there is not really a probability measure, just a sample space, and the statement is: whenever the left side is true, the right side is true, without talking about probabilities or i.i.d. Now if I assume the left side is true and try to show the right side is true, then something like this is of course a valid counterexample. So if you want a statement as strong as these deterministic statements, which hold for every omega, you are lost, and what you say is completely right: the goal is now to go to step two or three and introduce the probability measure. Then of course these clusters are rare: the probability of forming such clusters, if the system is i.i.d., is ridiculously small, and therefore you can exclude them if you prove something like statement two or three. So my message here is: with notion one, the strongest, globally Lipschitz forces are done, and if you want to generalize, you have to take some probabilistic
approach. Now I want to skip part number two, for the following reason. You could say: let us talk about almost sure convergence first. But the point is, what I will do is convergence in probability, with error estimates on the system, and you know that with good error estimates you can prove almost sure convergence via Borel-Cantelli or similar arguments. It is not an i.i.d. situation, so it is not an if-and-only-if, but I think you will all agree that talking about almost sure convergence does not bring any new ingredients to these ideas and is not the right thing to look at here. Therefore I would like to talk about convergence in probability, and I left some space on the board because of the following remark: if you talk about convergence in probability, and this is not so clear if you look at the textbook version, you have two parameters which you can tune in this notion. The first is that you can have different notions of distance: in convergence in probability you still have the d, and it is important to discuss the different choices. In the standard textbook examples your random variables map to some fixed vector space, say the x_n go from Omega to some R^a (let me put that here on the side). If all your notions of distance come from some kind of norm, you have the equivalence of all norms on this R^a, so none of these choices makes any change in the convergence statements. That is why in the textbook version there is no fuss about the different notions of distance, unlike in your analysis
class, where you have all the different notions of continuity. As long as you have equivalence of norms everything is fine and you can just replace one norm by another. But here the situation is different: in our case we have trajectories, and each trajectory lives on the N-body phase space, so the space depends on N, and you cannot use any equivalence of norms. It will make a difference whether your notion of distance between X and the object I will talk about next is an infinity-norm or a 1-norm. Another thing you have to tune in this notion of convergence in probability is the scaling. Mathematically, technically speaking, this could also be understood as part of the notion of distance, but I would like to make the distinction explicit. By different notions of distance I mean l-infinity or l-1 or something like that; the scaling question is: on which scale do I compare the difference between X and something else? On the scale of the full gas, on the scale of the typical distance to the nearest neighbour, or on an even smaller scale? You have different scales on which you can compare, and this is of course important to discuss. For this talk it is very helpful, for technical reasons, to take the comparison scale equal to the cutoff parameter. That is not so surprising if you dig a little deeper, and it makes the system nice for the following reason: comparing closeness of something on a scale of the order of the cutoff parameter makes sense; in other words, if my notion of distance were finer than that, the result would turn out to
be not very interesting, because if I have artificially changed my potential, my force, on a certain scale, then any result which goes below that scale is, for my taste, not very interesting. But that is just a side remark; the point is that for technical reasons choosing the comparison scale and the cutoff scale to be the same turns out to be very helpful in the proof.

So at the moment we are aiming at convergence in probability; that is what we plan to prove. But for what object? What I will do is introduce an auxiliary system which I call X̄. This is again an N-particle system, depending on t and on N; I just put a bar on every object as I defined it for the trajectories I already have. The only difference is the following. This is again a Newtonian system, so the time derivative of q̄ is given by p̄, but if I look at the force term, it is not given by the N-body interaction any more; it is given by the mean-field force, evaluated at the position of the particle. Remember, this mean-field force comes from the ρ we have guessed to be the right description of the cloud. So I just follow, so to say, the characteristics of this velocity field, and this defines my trajectories. And I assume that X̄ at time zero equals X at time zero, so I start with precisely the same initial conditions. This is the new auxiliary system. Here I assume that ρ is a given nice solution, so ρ_t is given, and given ρ_t I can solve these equations of motion. I want to stress this because sometimes one has the impression that only once you know q and p do you know what ρ is; no, I just solve the Vlasov equation for ρ_t and then solve the equations of motion along it. Now choose some δ smaller than one third.
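Since the bar system is just a characteristic flow along a prescribed velocity field, one can sketch it in a few lines of code. This is only a toy illustration under assumptions invented for the sketch (the function name `mean_field_flow` and the harmonic force q ↦ -q standing in for the mean-field force are not from the talk): a single particle in one dimension, integrated with symplectic Euler. The point is merely that X̄ is obtained by solving an ODE along a given flow, not by solving the N-body problem.

```python
import math

def mean_field_flow(q0, p0, force, t_end, dt=1e-3):
    """Integrate the auxiliary (bar) system  q' = p,  p' = F(q)
    with symplectic Euler, starting from the same initial data
    as the true system."""
    q, p = q0, p0
    for _ in range(int(round(t_end / dt))):
        p += dt * force(q)   # kick: prescribed mean-field force, no pair terms
        q += dt * p          # drift
    return q, p

# Toy stand-in for the mean-field force: harmonic, so the exact
# characteristics are known (q(t) = cos t, p(t) = -sin t from (1, 0)).
harmonic = lambda q: -q
q1, p1 = mean_field_flow(1.0, 0.0, harmonic, t_end=1.0)
```

Replacing `harmonic` by an actual convolution force would require a given solution ρ_t as input, which is exactly the point made above: ρ_t is an input here, not an output of the particle dynamics.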
And we could prove the following theorem; then we even get almost sure convergence. What we can prove is the following. Which notion of distance is used? I always use notions of distance which come from a norm. I look at the true system and compare it with the bar system in the infinity norm, the maximum norm on this R^(6N) space, and I show that the probability that this distance exceeds the comparison scale is small. There is some time-dependent constant here, which I can talk about later if you wish, but the interesting thing is the scales: the epsilon is N-dependent, namely N^(-δ), and the probability P( ‖X − X̄‖_∞ ≥ N^(-δ) ) is bounded by C_γ N^(-γ). By this I mean: for any γ in R there exists a C_γ such that the bound holds. Since these probabilities are summable in N, Borel–Cantelli then gives the almost sure convergence. So now you see that clustering is excluded: the probability that you have clusters, of the kind I introduced here, is extremely small, and for the initial conditions you can assume that there is no clustering. This is the result we have. (How much time do I have left? When did I start? More or less ten minutes; okay, good.) I would like to say a few words on the proof before going to the other notions of convergence. You see the scale is interesting: it is more or less the interparticle distance, sorry, the distance to the nearest neighbor. The average distance between two particles is of course of order one, while the typical distance of a particle to its nearest neighbor is like N^(-1/3) in this situation; we have three dimensions and density equal to one. So we are already close to that distance, and with a little more effort we can go below it, but this is something I will discuss later.
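The N^(-1/3) nearest-neighbour scale is easy to check numerically. The following is only a Monte Carlo illustration with parameters chosen arbitrarily (the helper `mean_nn_distance` is invented for this sketch): N i.i.d. uniform points in the unit cube, so the density is N and the typical distance to the nearest neighbour should scale like N^(-1/3).

```python
import random, math

def mean_nn_distance(n, seed=0):
    """Mean nearest-neighbour distance of n i.i.d. uniform points
    in the unit cube (brute-force O(n^2) search)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    total = 0.0
    for i, p in enumerate(pts):
        total += min(math.dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / n

# If the typical nearest-neighbour distance scales like n^(-1/3),
# then d_n * n^(1/3) should be roughly constant in n (about 0.55 for
# an ideal Poisson gas, somewhat larger here due to boundary effects).
scaled = {n: mean_nn_distance(n) * n ** (1 / 3) for n in (200, 800)}
```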
Now I would like to show you how this is proven, because I referred to this clustering situation and I want to show you how it is excluded. You might say: okay, initially there are no clusters, I agree; initially everything is i.i.d., you can do a law of large numbers, and there is no clustering. But the true system is very complicated, so how can one show that no clusters form at a later time? This is much easier than one might think, and the trick is the following; I would like to show you how we can exclude clustering at later times without really doing the hard work. The first idea one might have is to collect initial conditions: propagate the clusters backwards, name the initial conditions which lead to clusters at later times, and then estimate their probability. This is what other researchers have done, but it is very tricky and technical. What I will do is something very different: no backwards propagation of clusters at all. I will just show that for each time t clustering is rare, without naming the initial conditions which lead to clusters, and this is much simpler than one might think. I do it in the following way. We define a random variable which I like to call J: I take the minimum of the distance ‖X − X̄‖ times N^δ, and one, so J = min( N^δ ‖X − X̄‖_∞ , 1 ). This is a nice random variable: it gives me the distance on the right scale, so to say; I blow it up so that it is of order one on the respective scale, and I cut it off at one once the distance is large. And now I prove the following lemma, a Grönwall-type estimate for the expectation value of J: the time derivative of E[J] is bounded by some constant times E[J], plus a term which is little o of one.
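The Grönwall mechanism in the lemma can be made concrete with a toy computation. The constants and function names below are invented for the illustration: if g′ ≤ C·g + r with g(0) = 0, where the small constant r plays the role of the o(1) term, then g stays below (r/C)(e^(Ct) − 1), so it grows exponentially in time but remains of the order of r, uniformly in the particle number.

```python
import math

def groenwall_bound(C, r, t):
    """Closed-form Grönwall bound for g' <= C*g + r with g(0) = 0."""
    return (r / C) * (math.exp(C * t) - 1.0)

def integrate_worst_case(C, r, t_end, dt=1e-4):
    """Euler-integrate the saturated inequality g' = C*g + r, g(0) = 0."""
    g = 0.0
    for _ in range(int(round(t_end / dt))):
        g += dt * (C * g + r)
    return g

C, r, t = 2.0, 1e-3, 1.0          # r stands in for the o(1) error term
g_num = integrate_worst_case(C, r, t)
g_bnd = groenwall_bound(C, r, t)
# g grows like e^(C t) in time, but its size is set by r, not by n.
```

Explicit Euler underestimates the growing exponential step by step, so `g_num` sits below the closed-form bound; the qualitative message (exponential in t, small with r) is the one used in the proof.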
In the lemma, by the time derivative I actually mean a one-sided derivative. These quantities are continuous, everything is continuous, and once you have proven this estimate you use Grönwall to conclude that E[J] is controllable; for Grönwall it is enough to have a one-sided derivative as long as your function is continuous. So the goal is to prove this lemma, and I guess everyone is familiar with Grönwall. Having proven the lemma, the trick to control the expectation value of J is rather simple: you compare with a function f where f′ is given exactly by C times f plus the o(1) term. This you can easily solve by separation of variables, so you get good control of f, and it is not difficult to show that f always gives an upper bound for E[J], because its derivative dominates the one in the lemma. So with this lemma you can show that the error grows exponentially in time, but it will not grow in N. And now the point is: with Grönwall you get a good estimate on J, and an estimate on J gives you an estimate on the probability. That is the following observation: the probability that ‖X − X̄‖_∞ is larger than or equal to N^(-δ) is bounded by the expectation value of J. Why is that the case? Well, if this event holds, then J equals one: if the distance is at least N^(-δ), then N^δ times the distance is at least one, so J = 1. And the probability that your random variable equals one is a lower bound for its expectation value, as long as the random variable itself is non-negative.
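The step from the expectation of J to the probability is a pointwise bound, which one can verify sample by sample. The distances below are made up for the illustration (drawn from an exponential distribution; the helper `J` and all parameters are invented here); the inequality itself is deterministic: the indicator of the bad event never exceeds J.

```python
import random

def J(d, n, delta):
    """Rescaled, capped distance: min(n^delta * d, 1)."""
    return min(n ** delta * d, 1.0)

n, delta = 10_000, 0.3
rng = random.Random(1)
# Made-up sample of distances on the scale n^(-delta), for illustration only.
distances = [rng.expovariate(1.0) * n ** (-delta) for _ in range(5000)]

# Pointwise: if d >= n^(-delta), then n^delta * d >= 1, so J(d) = 1.
# Hence 1_{bad event} <= J sample by sample, and averaging gives
#   P( d >= n^(-delta) ) <= E[J].
p_bad = sum(d >= n ** (-delta) for d in distances) / len(distances)
e_J = sum(J(d, n, delta) for d in distances) / len(distances)
```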
And that is the case here: J is non-negative, so the expectation value is this probability times one, plus other non-negative terms, and therefore you get the estimate, which is what we wanted to prove. In the theorem I gave a little more detail about the error estimates; here I omit these details, but the little o of one can be made more explicit if you tune things the right way. What I wanted to show you with this is how clustering is excluded. If you want to prove the lemma, you look at the time derivative of J. If J equals one, the upper bound for the time derivative is trivial: you have reached the maximum and cannot grow any further. So as long as this rescaled quantity is at one, everything is trivial, and all you have to estimate are the cases where the distance is smaller than N^(-δ). Let us take N^(-δ) to be of the order of the interparticle distance. Now you can see that this excludes clustering for all times, because we only need to control the cases where ‖X_t − X̄_t‖_∞ is bounded by N^(-δ). But X̄ is i.i.d.; the bar system keeps the independence, everything stays i.i.d., and there you cannot form any clusters. If the particles are i.i.d. and, in the infinity norm, each particle can be moved by at most N^(-δ), which is about as far as its nearest neighbor, it is clear that you cannot build a cluster situation like that: you put your particles i.i.d. into the gas, and each one can move roughly as far as the distance to its nearest neighbor, namely like N^(-δ). The bar particles are the i.i.d. reference points here, and the X particles are the displaced ones. So you cannot build any clusters, and you see that clustering is excluded in a very simple way. Then of course the proof goes on and you finish proving the lemma, but this is not what I want to do now.
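The heuristic that i.i.d. particles do not cluster can also be illustrated with a toy Monte Carlo experiment; the sample size, radius, and helper name `close_pair_fraction` are arbitrary choices for this sketch. Among N i.i.d. uniform points in the unit cube, only a small fraction has another point much closer than the typical N^(-1/3) spacing.

```python
import random, math

def close_pair_fraction(n, radius, seed=3):
    """Fraction of n i.i.d. uniform points in the unit cube that have
    some other point within the given radius (brute force)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    close = 0
    for i, p in enumerate(pts):
        if any(j != i and math.dist(p, q) < radius for j, q in enumerate(pts)):
            close += 1
    return close / n

# Typical spacing at n = 500 is about 0.55 * 500**(-1/3), roughly 0.07.
# At a radius well below that spacing, close pairs should be rare.
frac = close_pair_fraction(500, 0.02)
```

This is the reason the displaced X particles cannot assemble a cluster as long as each one stays within roughly a nearest-neighbour distance of its i.i.d. bar partner.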
You might ask: this δ is smaller than one third, and it would be nice to go to the interparticle distance or below, right? Now we come to the next step. We have proven convergence in probability with this chosen scale, and you would like to go further: improve the cutoff, make it smaller, and also improve the scale at which you are working. And there you easily see that things fail. If you make this scale smaller, then for the better cutoff the theorem will no longer hold true, because you have collision-like events which from time to time give deviations that are too large; that is not very surprising. Now there are two things one can still do. One can stay with notion three, convergence in probability, and tune the δ; interestingly, making the convergence stronger, which means a finer notion of distance, is helpful for the proof. That is a little surprising, but if you remember that you want to run a Grönwall lemma, a finer notion of distance makes everything finer on both sides, and controlling something fine by something finer can have advantages. So one tunes the δ; going to another notion of distance, for example the l-1 distance instead of l-infinity, is not very helpful. The other trick is to go to convergence in distribution, and although I do not have much time, I would like to say a few words about it. Conceptually, I would like to explain the main difference between convergence in distribution and the stronger notions in the following way: for the same sample, things might look very different. Let me give an example. Think of a glass table, and you throw a dice on this glass table. One person looks at the dice from above, one from below. For the same result of the experiment, they
always see something very different, but the distribution, the probability to see a six, is the same for both people. And this helps a lot. It means the following: I do not care if, for the same ω, my X and my X̄ tell completely different stories; I just want to say what the probability of certain outcomes is, because my initial state is random anyway. If you go to convergence in distribution and use that as your notion, you do not have to follow all the particles in detail. You can say: if I have some event, say some collision event, I can propagate both trajectories backwards; I start with different ω's, but as long as the probability of those different ω's is more or less the same, or the probability density in their neighborhood is more or less the same, I get some convergence in distribution. And now the trick, and this is an outlook I would like to give, is to use this to go further. If you have a more singular interaction, the singular part of the interaction is of course a little bit collision-like. Now if particles collide, you have the true evolution, and the collision looks a certain way (do I still have some space here? maybe): the particles collide in a certain way, and after the collision they fly apart. The X̄ evolution might differ only by small changes in the effective description, but small changes at a collision can lead to very different behavior afterwards, and you might then have a strong deviation between X and X̄ in the future. But if you say you do not care so much about pathwise closeness, if convergence in distribution is sufficient for you, you can compare, so to say, starting from the same configuration at this time, make a backwards propagation, and say: okay, I have different ω's, an ω and an ω̄ for the initial condition, and after
the collision they lead to more or less the same trajectory. Of course you started at different points, but still, if they are close, you get some kind of convergence in distribution: you have different ω's which tell the same story, and these different ω's have the same probability. We already have some partial results in this direction with a PhD student of mine, which I hope we can publish at some point; this is an ongoing project, and the idea is to do exactly that. And you see why a notion like the one I gave in the theorem, or in the lemma, can fail in the collision-like case: it might be that the true trajectory and the mean-field trajectory are close to each other, but if the interaction is too singular, this still leads to a large deviation, so you will not be able to prove the Grönwall-type lemma, and then you have a problem. In fact one can show that single particles deviate from each other, so for the notion in the theorem there would indeed be a counterexample for more singular interactions. This is the idea for convergence in distribution: we change the ω and only look at probabilities of certain events, and this is helpful for going to more singular cases. And here I would like to stop; I think I am a little over time anyway. Any questions?

[Audience] Do you expect the results to be wrong even in the weak sense for very singular interactions?

That is a good question. In the weak sense, I guess it will also hold for the Coulomb case. But if you copy the theorem I had (where is it? does anybody see the theorem? here it is) and use this notion of distance, then you can show that the right-hand side will not go to zero. I am not saying that is a surprise; I just wanted to make the statement. Of course, if you want to improve and look at more singular interactions, then this will fail at some point, so you have to do
something else, and the something else is to look at another notion of convergence.

[Audience] Actually, it has to be a notion of convergence which is consistent with the notion of weak solution that we have for the Vlasov equation, I think. And I think that for weak solutions of the Vlasov equation you cannot control this kind of problem.

I do not quite get what you are saying. The point is, if you have δ smaller than one third, this is a proven theorem, and then as a corollary... The point is that I make statements comparing two trajectories, not between X and the solution of the Vlasov equation. Ah, now I understand what you are saying; good point. On this level I compare my trajectory with the auxiliary trajectories given by the Vlasov flow, and at the end, of course, I want to compare my X not with another trajectory but somehow with the Vlasov equation itself. That notion of distance will then be different, of course, because there it does not make sense to use this one, and the distance between X and ρ in some Wasserstein distance will for sure be worse than this: it will more or less inherit this notion of distance, and that is the best you can hope for. On that scale, so to say, everything can happen, and that of course limits the result on the closeness of X to the solution of the Vlasov equation, if you do it as I explained here, with some Wasserstein distance between the densities; there I agree. But the strategy here is to look at the trajectories first, and that seems to be a little simpler and to give stronger results, forgetting about the density. The advantage one has in the Grönwall-type estimate is that you still have all the particles under control: you can name the position of every particle in a very fine way. The little l-infinity norm is of course very sensitive; in a
very precise way you control the full trajectory, and that helps for the next step, so to say. In what you presented yesterday, you did it stepwise; what replaces going stepwise here is this Grönwall-type estimate. I have very good control on the system at time t, and that allows me to get good control on the derivative, so on what happens in the next short time interval, and therefore looking at the trajectories is helpful here, I think.

[Audience] It might be very naive, but you are writing your Newtonian equations with a force directly. Do you gain anything by looking at forces which have a potential, or is this assumed anyway?

Well, it is not really written down as an assumption, but all the forces we have in mind can be written as the convolution of a Gaussian, more or less, with Coulomb, and then of course you have a potential. I have never really thought about it, but I do not see any point where it helps to assume that there is a potential. Everything we have in mind is quite explicit: the force is Coulomb outside, and inside the cutoff it is something smooth, and we do not care what the smooth part does anyway, so if you replace it by something which guarantees a potential, it would not help. I do not see any point where this could be helpful.

[Audience] In this last case you described, do you also use some kind of process like this X̄, maybe coupled to some stochastic jumps?

Yes, exactly; more or less that is the idea. We have some partial results in that direction; this is more or less work in progress.

[Audience] And in the estimate before, if I understand the argument correctly, you sort of prove that there is not a single cluster, right?

I would name it differently: I control the probability for clusters at later
times, without having to propagate them backwards. In some sense, this cutoff at one should remind you of something like a stopping time: if things get too bad, it is game over, you count this as one, and you collect everything bad you have counted along the run. This is how you can also understand the cutoff. As long as everything is well behaved, which according to this definition automatically means no clustering at time t, you do your estimates; and as soon as particles start to cluster, so to say, you stop and count the value one (you could also take the supremum over times if you like). That is more or less the idea: you collect all these bad events on the run.

[Audience] And when you say no clustering, that means really not a single one? It is not that you control the number of clusters?

Well, not really. As I said, we distinguish between the error terms and the propagation of errors, and for the propagation of errors it is helpful to work on the event where there is no clustering. It is precisely as it is written there: of course there could be clustering, but if you have clustering, J is already one, so for the Grönwall argument that case is not really bad. Assuming there is no clustering, you do your controls on how the errors propagate, and then you summarize; that is how it works. So I am not cheating anywhere by making an assumption which is not there; it is just that in the clustering situation the estimates are trivial. And you know, there is no free lunch, but this one is rather cheap, I would say. The trick is that J acts like this kind of stopping time: on the run you collect the bad events and control their probability. We only collect the probability of those events and never propagate them back to time zero; that is of course very helpful here. Okay.
So if there are no further questions, maybe we should stop; we are already a bit over time. We will resume in about 15 minutes. So let's thank Peter again.