So λ is a function of the spatial density, and ρ is the spatial density. M_f is the Maxwellian constructed with the mean velocity and the temperature of f itself. These are the basic formulas. I do not like this definition of T: T carries a factor 1/3 (we are in three dimensions; 1/d in general) in front of the mean square fluctuation, the square of the difference between v and u, where u is the mean velocity, that is, the mean momentum divided by the density. So, given f, you can construct such a Maxwellian, and then the kinetic equation has the prefactor λρ in front of the Maxwellian minus f. We shall assume that x lives in the d-dimensional torus and v, as usual, in R^d. OK, so this is the equation, the BGK equation. It describes the dynamics of a tagged particle in a gas; if you want, you can interpret it as a nonlinear stochastic process.
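For the record, this is my reconstruction of the formulas just described (the "1/3" of the talk is 1/d in dimension d; λ(ρ) = ρ in the original BGK model, while λ ≡ 1 later in this talk):

```latex
\rho(x,t)=\int f\,dv,\qquad
u(x,t)=\frac{1}{\rho}\int v\,f\,dv,\qquad
T(x,t)=\frac{1}{d\,\rho}\int \lvert v-u\rvert^{2} f\,dv,
\qquad
M_{f}(x,v,t)=\frac{\rho}{(2\pi T)^{d/2}}\,
\exp\!\Bigl(-\frac{\lvert v-u\rvert^{2}}{2T}\Bigr),
\qquad
\partial_{t}f+v\cdot\nabla_{x}f=\lambda(\rho)\,\bigl(M_{f}-f\bigr),
\quad x\in\mathbb{T}^{d},\ v\in\mathbb{R}^{d}.
```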
You know f, and this is a jump process: the law of a single particle which jumps at an exponential time modulated by this λ, and the outgoing state has spatial density ρ given by f itself and velocity distributed according to the Maxwellian. So it is an instantaneous thermalization of the particle. Of course it is a nonlinear process, because in order to know the Maxwellian you have to know the solution at that moment. If you want, you can represent this jump process in the sense of McKean: you have a differential equation, you solve it, and you interpret the solution as the law of a process.

The reason why BGK introduced this equation is practical (they were physicists, and practical): to simplify the computations in a sort of intermediate regime between the hydrodynamical limit and the kinetic regime. What is interesting in their paper is that the microscopic starting point was the Boltzmann equation itself, not Newtonian particles; here everything is stochastic. The argument of BGK is more or less the following. You have the Boltzmann equation with 1/ε in front of the collision operator, whatever the collision operator is, and ε is very small. You represent the solution by a Trotter formula, introducing the free-stream operator S_0, which is this one, and you solve the homogeneous Boltzmann equation, with this ε in front, just on a rescaled time. So you fix τ, the length of the interval in which you have the free stream, which is also very small. Since ε is much smaller than τ, on each interval you are solving the homogeneous Boltzmann equation, and you know that at the end of the interval you have almost reached the Maxwellian, the equilibrium given by the parameters of f itself.
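As an aside, the instantaneous-thermalization picture is easy to see numerically. Here is a minimal sketch, not the scheme of the talk: the spatially homogeneous caricature of this jump process with λ = 1, where every particle of an ensemble is thermalized at rate one against the ensemble's own Maxwellian. All concrete choices (ensemble size, bimodal initial law, final time) are mine.

```python
import numpy as np

# Spatially homogeneous caricature of the BGK jump process, with lambda = 1:
# at rate one, a particle's velocity is resampled from the Maxwellian built
# with the ensemble's own mean velocity u and temperature T.
rng = np.random.default_rng(0)
N = 10_000
v = rng.choice([-1.0, 1.0], size=N)   # bimodal initial law: u = 0, T = 1

t, t_final = 0.0, 2.0
while t < t_final:
    t += rng.exponential(1.0 / N)     # total jump rate N, i.e. rate 1 per particle
    u, T = v.mean(), v.var()          # parameters of the current Maxwellian M_f
    i = rng.integers(N)               # particle chosen with probability 1/N
    v[i] = u + np.sqrt(T) * rng.standard_normal()  # instantaneous thermalization

# u and T are statistically preserved, while the law relaxes toward a Gaussian
```

One way to see the thermalization is that the fourth moment drifts from 1 (bimodal) toward 3T² (Gaussian); nothing here is specific to the scheme of the talk, which makes this space-inhomogeneous through the smearing function φ.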
OK, so instead of using this Trotter product formula, you can, because of this well-known asymptotic behavior, introduce another Trotter formula in which the evolution of the homogeneous part of the Boltzmann equation is replaced by an instantaneous transition P. P is a transition probability defined in the following way: with probability τλ, which is assumed smaller than 1, you jump to the Maxwellian given by the parameters of the previous step, and with probability 1 − τλ nothing happens. This "nothing happens" is there because you want to keep a finite amount of free streaming. If you do so, you plug this into the Trotter formula, you add and subtract the identity (this is completely standard), you rewrite f at time t in this way, you iterate, and you find this formula, which is nothing else than the Duhamel formula of some equation. Passing to the continuous version, you compute that (P − 1)/τ is nothing else than the map f ↦ λ(M_f − f), and so you arrive at the Duhamel formula for the BGK equation. I am cheating in all directions: this is not literally the argument of the BGK paper, but in words they say something like this. The justification is that this projection is less expensive, from the numerical point of view, than computing infinitely many collisions, which play no real role when the mean free path is very small. This is what I have understood. It is a very nice paper; the first three or four pages are very well written. OK. So, you know, the Boltzmann equation is obtained from Hamiltonian dynamics under a very precise scaling law, while this equation apparently follows from heuristic arguments. So, what we decided to do: this is research in collaboration with Paolo Buttà from Rome and Maxime Hauray from Marseille.
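Schematically, and with the same amount of cheating the talk admits, the splitting argument reads as follows (my reconstruction):

```latex
f^{\varepsilon}(t)\approx\bigl[S_{0}(\tau)\,P\bigr]^{t/\tau}f_{0},
\qquad
Pf=\tau\lambda\,M_{f}+(1-\tau\lambda)\,f,
\qquad
\frac{Pf-f}{\tau}=\lambda\,(M_{f}-f),
% iterating the product formula and letting tau -> 0 yields the Duhamel formula:
f(t)=S_{0}(t)f_{0}+\int_{0}^{t}S_{0}(t-s)\,\lambda\,\bigl(M_{f(s)}-f(s)\bigr)\,ds .
```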
And we started to think about how to define a suitable stochastic particle system which mimics BGK, in the same spirit in which the direct simulation Monte Carlo method, the Bird scheme, is related to the Boltzmann equation. The Bird scheme is, if you want, the space-inhomogeneous version of the Kac model, the well-known Kac model being the homogeneous part: you take the Kac model and you say that two particles interact when they are within some distance, and so on. So the non-homogeneous Kac model is nothing else than the direct simulation Monte Carlo method. And actually, under certain circumstances, which are not very physical, you can prove the convergence of this stochastic particle system to the Boltzmann equation. Actually, not really the Boltzmann equation: if you do not scale properly, you get what is called the Povzner equation, a sort of regularization of the Boltzmann equation in which the interaction is nonlocal. But who cares, at this stage: from the numerical point of view, a strictly local interaction does not play any role. So we posed the problem of defining a suitable stochastic particle system related to BGK, and of proving a convergence result. This is the argument of my talk today; I would like to be more ambitious, but we are not able to produce much more than this.

So let me describe the situation as regards the existence theory for BGK. BGK is much simpler than the Boltzmann equation; nevertheless the nonlinearity is terrible. It is not bilinear, because the nonlinearity enters through the Maxwellian. As far as I know, constructive results are scarce: there are some results, essentially using the DiPerna-Lions approach (renormalized solutions, compactness arguments, solutions without uniqueness), but there is a result of Perthame and myself from 1993 in which you have an existence and uniqueness result in weighted L∞ spaces.
I am not aware of other results. In the original BGK equation this λ(ρ) is ρ. The reason is that when the density is very high, the relaxation towards equilibrium is much faster than when the density is small, so, physically speaking, you should have a dependence on ρ here. The case λ(ρ) = 1, where the result of Perthame and myself works, is somewhat unphysical, but you can take a sort of compromise: λ depending on ρ but bounded, so that at some point there is a saturation. A function increasing linearly up to some point and then constant is more physical than λ = 1, and as regards existence and uniqueness you are still in very good shape. So this is something that can be done; for the moment I take λ = 1, which is the case in which our existence result works well.

In view of the introduction of the particle system, I introduce a regularization of the BGK equation, in the following sense. In BGK the fields are strictly local: to compute the density, the temperature and the mean velocity, I have to know f exactly at the point x. With particles this does not make sense; you have to smear around the point. So it is convenient to introduce a regularized version of the BGK equation, which is this one: here you have the stream part, here the loss part, minus g, and the jump is into a regularized Maxwellian defined in the following way. The Maxwellian is constructed with the fields u_g^φ, T_g^φ, defined as follows: first you regularize the density (ρ_g is the density of the distribution g) through a smearing function φ, and then you construct the momentum and temperature fields according to these formulas. So, to compute these quantities at x, you smear around x, right?
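As far as I can reconstruct them from the description, the regularized fields and the regularized equation are:

```latex
\rho_{g}^{\varphi}(x)=\int \varphi(x-y)\,g(y,v)\,dy\,dv,
\qquad
u_{g}^{\varphi}(x)=\frac{1}{\rho_{g}^{\varphi}(x)}\int \varphi(x-y)\,v\,g(y,v)\,dy\,dv,
\qquad
T_{g}^{\varphi}(x)=\frac{1}{d\,\rho_{g}^{\varphi}(x)}
\int \varphi(x-y)\,\bigl\lvert v-u_{g}^{\varphi}(x)\bigr\rvert^{2}\,g(y,v)\,dy\,dv,
% M_g^phi below is the Maxwellian built with the fields (rho, u, T)_g^phi:
\qquad
\partial_{t}g+v\cdot\nabla_{x}g=\lambda\,\bigl(M_{g}^{\varphi}-g\bigr).
```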
OK. The reason for doing this is that, for an N-particle system, I have to define the empirical fields, and I do it in the same way. I consider the d-dimensional torus; Z_N is a collection of positions and velocities (this is the notation), and you define the empirical hydrodynamical fields ρ_N^φ, u_N^φ and T_N^φ according to these formulas. For instance, you can think of φ as the characteristic function of a ball of a given radius: then ρ_N^φ(x) is the fraction of particles falling in the ball around x, ρ_N^φ u_N^φ(x) is the momentum of the set of particles around the given point x, and for the temperature you have exactly the same kind of formula. Since all these quantities depend on the configuration Z_N, I write the index N; whenever you see the index N, it refers to empirical quantities, that is, quantities constructed from an explicit configuration.

Then you define a stochastic process through a generator, which is this one; G is a test function. Here you have the stream operator, here the loss part of the process, and the gain part is the following. You take the configuration Z_N on which you are doing the computation, you remove the position and velocity of particle i, and you replace them by y and w. Let me write it, it is better: x_1, ..., x_i, ..., x_N, v_1, ..., v_i, ..., v_N is the configuration Z_N, and the new configuration Z_N^i(y, w) is exactly the same, except that here you put y instead of x_i and here you put w instead of v_i; the rest of the configuration is untouched. You just remove the state of particle i and replace it by (y, w). [Question:] So, do I understand correctly that w is sampled according to the Maxwellian?
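To make the notation concrete, here is a minimal numerical sketch of the empirical fields and of one jump of this process. All concrete choices (d = 2, a Gaussian smearing kernel φ of width δ, λ = 1, the unit torus, plain rather than periodic distances) are mine, not the talk's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, delta = 2, 500, 0.1          # dimension, particle number, smearing width

def phi(r2):
    # Gaussian smearing kernel of width delta (one possible choice of phi;
    # the talk also mentions the indicator of a ball)
    return np.exp(-r2 / (2.0 * delta**2))

def empirical_fields(X, V, x):
    # empirical rho, u, T smeared around the point x (periodicity ignored)
    w = phi(np.sum((X - x) ** 2, axis=1))
    rho = w.mean()                                  # ~ (1/N) sum_j phi(x - x_j)
    u = (w[:, None] * V).sum(axis=0) / w.sum()      # phi-weighted mean velocity
    T = (w * np.sum((V - u) ** 2, axis=1)).sum() / (d * w.sum())
    return rho, u, T

def jump(X, V, i):
    # particle i is redistributed: position smeared by phi around x_i,
    # velocity drawn from the empirical Maxwellian evaluated at x_i
    _, u, T = empirical_fields(X, V, X[i])
    X[i] = (X[i] + delta * rng.standard_normal(d)) % 1.0   # stay on the torus
    V[i] = u + np.sqrt(T) * rng.standard_normal(d)

X = rng.random((N, d))                # initial configuration Z_N
V = rng.standard_normal((N, d))
t = 0.0
while t < 1.0:
    t += rng.exponential(1.0 / N)     # jump times of total intensity N
    jump(X, V, rng.integers(N))       # particle chosen with probability 1/N
```

With φ the indicator of a ball of radius δ, the quantity `rho` above is, up to normalization, the fraction of particles in the ball around x, as in the talk.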
Not yet; this is just a kinematical definition for the infinitesimal generator. Later, later. OK, so this is the configuration, and the generator is this one. I construct the empirical Maxwellian centered at this point, which means that I compute the empirical fields at the point x_i, as a function of the new variable, and, as you see, I average over it; and here I also have the transition in position. Let me explain what this stochastic process does. There are jump times of intensity N, so jumps are very frequent; a particle is chosen with equal probability 1/N and performs a jump from its actual state (x_i, v_i) to a new one (x̃_i, ṽ_i), where the new position is extracted according to the distribution φ and the new velocity according to the empirical Maxwellian. So I move both position and velocity. This is the generator, G is a test function, and this is the law of the process: of course I will assume that the initial law is a pure product state, with independent particles, and the law evolves through the adjoint of the generator.

OK, so the result consists of two steps. First, fix the smearing function φ and show that the particle system, at fixed φ, converges to the solution of the regularized BGK equation: this is the particle step. Then you make φ tend to a delta function at the level of the limiting equation (this is a pure PDE problem) and you show that the solution converges to the real solution of BGK. Then, of course, you can take the diagonal limit, N to infinity and simultaneously δ to zero; but according to the estimates we have, this is very bad, not honest, in the sense that physics usually prescribes a fixed relation between the scales, which we are ignoring. This diagonal limit means that you have to remove δ like log log log of N or something like that, depending
on the number of Gronwall lemmas you apply in the proof of the particle step. Anyway, the result is the following: you take the Wasserstein distance of order 2 and you compare the j-particle distribution functions. Under suitable hypotheses, which we shall discuss later, including the fact that initially we assume factorized states (or convergence to factorized states), the Wasserstein distance between the j-particle marginal of the N-particle system and the j-fold product of the solution of the regularized BGK equation is indeed less than j times a constant over N, on the typical times; this constant γ_φ depends on δ, the cutoff length, in a very bad way, and this is the reason for the log log N. Anyway, the interesting part is the first step; the second is just to present a complete result, in which we reach the limiting equation, but there is no physical reason for this procedure: if you do a real scaling limit, the problem is much more difficult. This second part is purely analysis and I will not discuss it; it is completely standard, we use the estimates of my paper with Perthame, and it works.

So step 2 is just analysis; when you decompose this distance, this part is the analytical part and this is the particle, stochastic-process part, which is the point on which we want to focus. Why W_2 and not another norm? Well, in the analytic part we use weighted estimates, so the convergence there is much stronger (convergence in L1 is stronger than convergence in W_2), but here we use the law of large numbers, which of course does not work in L1, so we need weak convergence: the topologies of the two steps do not agree. Actually, the Wasserstein distance is important because it is natural when you use the technique of coupling processes, which is what we are doing here; and if you want to recover a strong norm, you can replace the Euclidean distance with something stronger. For instance, L1 is equivalent to the Wasserstein distance with the discrete
metric: zero when the two points coincide, x = y, and one otherwise. If you define the Wasserstein distance with this metric, the resulting topology is equivalent to L1 (the optimal coupling then realizes the total variation distance, which for densities is the L1 distance up to a factor). So I hoped to work in this setting, which is very natural for the stochastic process, but it does not work, because we need the law of large numbers (I will try to convince you of this), and for that we need a weaker topology. So this is something technical. Why W_2 and not W_1 or something else? This I do not justify; there are more results available for W_2 than for the other Wasserstein metrics, but in the present context W_2 in particular does not play any special role.

OK, so here we have a sort of mean field, but the interaction is not binary, so we cannot use a hierarchy; I mean, the hierarchy can be used for the Kac model, but not here, because one particle interacts with the rest of the world. The other possibility is to apply, as always, the law of large numbers, but in a different way. To give a very rough idea: you test the one-particle marginal against a test function and you take the time derivative, so you have the derivative of the law, because you integrate over all variables but the first. If you do this, you have the good part, which is the stream part; here you should have N, but N − 1 terms are exactly compensated by the gain part, because the gain part does nothing on particles different from the chosen one. So you have N − 1 minus N − 1, and what remains here is minus one. Remember the generator: here you have N, but suppose the test function depends only on the first particle. Then this term differs from the identity only when i = 1; for the N − 1 terms with i different from 1, I reproduce exactly G, because the configuration changes only in variables on which G does not depend: here I can integrate over dv, I get one here and one here, and I reproduce exactly G. So I have N − 1 terms which compensate this one; this explains why I have the minus one here, and this is the gain
part. Now, I cannot handle this as it stands, but, fingers crossed, I say: if f_N is something which factorizes, or would factorize in the limit, then I can use the law of large numbers, saying that this empirical distribution converges to this one, and so this empirical Maxwellian converges to this one. Now, this Maxwellian does not depend on the part of the configuration Z_N other than x_1, which means that I can integrate this term over all the other variables, reducing everything to the dependence on one particle. The law of large numbers transforms the dependence on all the other particles into a dependence on a distribution function, and the dependence is only on the first variable; so I rewrite this term in this way (I do not know yet that I am allowed to do this), and at this point, you see, this no longer depends on x_1, v_1, so I integrate in v_1, getting the spatial density, and in x_1, which is now free and gives one, and I get this: the weak form of the BGK equation.

So the output of this computation is the following. We have no hierarchy and no binary collisions, but if I look at the behavior of particle 1 and take another particle, say particle 2, the action of particle 2 on particle 1 is somehow trivial: particle 2 influences the motion of particle 1 only through the fact that it contributes to the construction of the empirical fields, and only at order 1/N. So particle 2 has a very weak influence on particle 1, and I expect particle 1 and particle 2 to be morally independent. It remains to turn this convincing argument into a concrete, rigorous proof, and this is what we do. I want to give you a sketch of the proof because, in my opinion, it is interesting. [Maybe questions now, or comments?] Let me continue; comments later. I can write here. The technique for proving convergence is a coupling technique: I consider the nonlinear process given by the law, I consider N independent copies of this
process, and I try to couple them with the N-particle system. The coupled generator is the following; I write it here. Z_N is the configuration x_1, v_1, ..., x_N, v_N of the particle system, and σ_N = (y_1, w_1, ..., y_N, w_N) collects the N independent copies of the nonlinear process (I do not write the nonlinear generator again; it is part of what I am writing here). L_Q generates a process on the pairs (Z_N, σ_N), and on a test function G(Z_N, σ_N) it is defined in this way. Here there is the stream part: v_i dot the gradient in x_i plus w_i dot the gradient in y_i (this is (x_i, v_i), this is (y_i, w_i)), minus N, applied to G computed at the configuration (Z_N, σ_N). Now the interesting part: plus the sum over all i from 1 up to N. The idea is to make a simultaneous jump of the two processes, in order to optimize the coupling. To do this I introduce the new variables x̃_i, ṽ_i, ỹ_i, w̃_i; then I have a function Φ of x̃_i, ỹ_i, times a function script-M^Φ of x̃_i, ṽ_i, ỹ_i, w̃_i, and then G in which, as before, I replace particle i in Z_N by (x̃_i, ṽ_i), and do the same in σ_N with (ỹ_i, w̃_i). So I have to specify what these objects are. This script-M is a joint representation of the two Maxwellians. It is clear what a joint representation is: a distribution on the product space such that, taking marginals, you recover the first distribution and the second one. When you define the Wasserstein distance between two measures, you take a joint representation, you integrate the distance between the points against it, and then you take the infimum over all possible joint representations; this is the definition. Here you take the optimal joint representation between the two Maxwellians; it exists. I am working in W_2, so I take the one optimal for W_2, but it would work also for W_1. And Φ is an optimal representation between the two smeared position distributions, which basically means that
I move x_i and y_i by the same amount (let me say it in words, it is better) and then I take a delta function, so that the new positions stay as close as possible. So what am I doing? I take a single exponential time, simultaneously, for the two paired particles. Initially, the law of this coupled process is exactly a delta function on the product: with probability one the two configurations are the same. At some point they start to differ, but I try to make the minimal change possible, to optimize the Wasserstein distance, and what I do is take the best joint representation after a jump. The two particles, which initially are at the same position with the same velocity (forget the position for the moment, look at the velocity), change velocity, because they jump according to two different Maxwellians, so I use a joint representation of those. You know, you can take the infimum of the two Maxwellians, and distribute the rest according to the product: you make an error which is as small as possible when the two distributions are close. And the two distributions are close if the function g is weakly close to the actual configuration of the particle system; in other words, if the law of large numbers works. Anyway, one has to do the computation, and what do you find? [Question:] When you do this optimal coupling of the two Gaussians, they are not jointly Gaussian, are they? [Answer:] No, no: I do not know what this joint representation is explicitly, and I do not need to; I only use averages of it. I know that an optimizer exists, and what I use is the theorem which says that, if I compute this average difference, I get the distance between the mean velocities and the temperatures of the two Maxwellians, nothing else. So I can state the fact: the integral against script-M^Φ, in dx dy dv dw, of |x − y|^2 + |v − w|^2 is equal to... sorry, let me separate the positions, because what I am using is
the following. Forget the position, because the positions are handled separately; I have just two Maxwellians, and what I am using is that, if the representation is optimal, the average of |v − w|^2 is the distance between the mean velocities squared plus the distance between the square roots of the temperatures squared: W_2(M_1, M_2)^2 = |u_1 − u_2|^2 + d (√T_1 − √T_2)^2 in dimension d, where u_1 is the mean velocity of the first Gaussian and u_2 that of the other. I use this formula. It is clear that if the Maxwellians have the same covariance, the distance is the distance of the centers, and if they have the same centers, it is the other term; realizing this is not difficult: in the first case it is just a translation, in the other a dilation. [Question:] But I asked whether the joint representation is a 2d-dimensional Gaussian. [Answer:] In any dimension... ah, OK, yes: twice the dimension, for the v and the w together.

So I use this, and the quantity I want to estimate, in my context, is this one: I_N(t) is the integral, against R_N, of |x_1 − y_1|^2 + |v_1 − w_1|^2; the initial law is a delta function over a product state, and it evolves according to the adjoint of the coupled generator. Actually, to be rigorous I should symmetrize this quantity, because when I take the derivative I find other differences; but everything is symmetric in my context, so no problem. Then, if I take the derivative, dI_N/dt is less than I_N(t), which is good for Gronwall, plus the integral against R_N of the smearing function φ(ζ) times |u_N^φ(x_1 − ζ) − u^φ(y_1 − ζ)|^2 (this is the square of the Wasserstein distance) plus a similar term for the temperature, which is more difficult to handle. Now I just want to remark that here, instead of this, you can add and subtract u_N^φ, the empirical field on the y variables, so
that, on one side, I reconstruct x − y, reproducing a term like the first one, while on the other side I have the field computed on the average minus the empirical field on the independent process; and since this is the independent process, I can apply the law of large numbers. So this is the idea: coupling, plus the law of large numbers on the independent process. Since my interacting process is morally close to an independent process (it is a self-consistency argument), I charge the law-of-large-numbers part on the independent process, and the rest is a difference which is exactly what I have to estimate. At the end I get a Gronwall inequality; morally, because there are a lot of other terms and the proof is not as simple as I am pretending, but it is conceptually simple: dI_N/dt is less than a constant times I_N, arising from terms like |u(x) − u(y)|, plus a term of order 1/√N due to the law of large numbers; since I_N(0) = 0, Gronwall then gives I_N(t) ≤ C t e^{Ct}/√N, and you can conclude.

Now let me comment, if I have five minutes. This is not satisfactory at all, for several reasons. First, we have not understood much about the possibility of giving a particle meaning to the BGK equation: this is a sort of Monte Carlo method for BGK, but nothing more. Second, there are some hypotheses in our paper that I do not like: the function φ is bounded from below. The point is that φ may vanish: when you compute u, it is the sum over j of φ times v_j divided by the sum of the φ's, and if you have a local vacuum you are in trouble, you cannot estimate this denominator from below, and so on. So in the first paper we assume φ bounded from below, which is not nice, because updating your state according to what you see around you should be local, while a φ bounded from below brings a long-range interaction into the game. This must be removed, and we are working on it: it should be possible to prove that the system behaves in a rather homogeneous way, so that the number of particles in a ball is bounded below by a constant times N. So this hypothesis is not satisfactory. And then, and this is also interesting, we want to remove the fact that λ is
independent of ρ. If λ depends on ρ, there is another problem in defining the coupled process, because the typical times of the two systems are different: one jumps with rate λ(ρ_1) and the other with rate λ(ρ_2). One way is to make the two processes jump with the same prescription, with the minimum of the two λ's, and then let the remainder move as an independent process; or you adjust the time. It is very complicated, a nonlinear way of doing this, so this is work in progress; it is not obvious. Also, I do not know whether BGK can be justified in terms of a scaling limit; apparently not, it seems to be heuristically founded. But maybe one can describe, in terms of the typical scales, in a more precise manner than what I presented at the beginning, how to derive BGK. This is interesting, and it is not academic, because recently I realized that there are a lot of papers on gas mixtures using BGK as an intermediate description between the kinetic regime and hydrodynamics; and what is very strange is that, also in the mathematical literature, the BGK models for mixtures are very different from each other, in the absence of a precise prescription. Actually, I think this is because there are various time scales: if you have a mixture of gases then, whatever the BGK model, at some point you have global thermalization, but you are interested in the intermediate regime in which the two species are not yet completely mixed; there is probably a time scale on which each species thermalizes separately, and this is what people try to describe. (I am finishing.) What happens when the species thermalize separately, before global thermalization, is unclear; but if you do not control what happens for the single species, there is no hope. So this is the program: to try to understand a little better what happens when you have a mixture. OK, thank you. [Applause.] [Chair:] Thanks; actually, I was not looking at the clock... Time for questions. [Question:] I cannot
see the main theorem at the moment, but: you do not rescale this smearing function φ to be a delta function in the limit N going to infinity? [Answer:] No: I take the two separate limits and then the diagonal limit. [Question:] But you could also do it directly. [Answer:] Yes, in principle I could; but it would not be very honest: I would do exactly what I said, then state the dependence of δ on N, and formulate the theorem in that way, and the reader could not understand why. [Question:] I would like to ask: in the original BGK model, one of the main quantities you control is the entropy, and, as far as I can see, in this talk you have not used it at all, at any stage. Did you try? [Answer:] I mean, BGK has all the conservation laws and the H-theorem, and hence exactly the same hydrodynamics as the Boltzmann equation; which, by the way, is not the true hydrodynamics, but the hydrodynamics of a low-density gas; it is absolutely the same. So in some sense one can justify that, in the regime in which the mean free path is very small, both models converge to the same thing, that is, a local Maxwellian. But you cannot do this here, because you have no control on that: it is not the distribution, it is only the averages that converge, in this sense; on the equation itself you have no control. [Question, partly inaudible:] Is it efficient? It should take longer to...
[Question:] Now, you say... what is the hydrodynamics? With Boltzmann I get the perfect gas law in the Euler equations; let us take the compressible regime. Would you say that λ(ρ) creates a sort of correction to the... [Answer:] No, no: not to Euler. Euler is the same. [Question:] I mean, in the compressible limit. [Answer:] No, no correction: Euler is the same. By the way, technically speaking, for BGK I am not aware of an analysis of the hydrodynamical limit, because the interaction is not bilinear and the Hilbert expansion does not work. [Comment:] I know that you did the incompressible scaling for the BGK as a preparation for the Boltzmann equation. [Chair:] I remember the question was whether you used the entropy in this. [Answer:] No. I mean, maybe... OK, so: from particles to the kinetic equation, if the dynamics is Hamiltonian, there is nothing of this sort to do; for stochastic systems there is the Varadhan method, which is very effective. I do not master it; I should ask Rezakhanlou whether it can be applied here. But in this case, since the interaction is not binary, I am not convinced that the entropy method can work. I do not know the answer. [Question:] If I understand correctly, one important point is that the interaction is not binary, but, as usual, it depends weakly on each particle. Can you quantify in which sense it must depend weakly, so that your scheme works? If I think of another kinetic equation, in which sense do I need the interaction to depend weakly on each particle for the scheme to work? [Answer:] I mean, the rate in front is 1/N; a given particle performs a finite number of jumps in a finite time: this is the main point. [Question:] And the correlations; in which sense does the interaction converge to a function of just the empirical measure? Or maybe it is not a relevant question; I am not understanding the convergence to the solution of the limiting equation. You have an
interaction which is a function of all the positions, and it depends in some sense weakly on each one, so it converges to a function of just the empirical measure. [Answer:] Why do you say that it depends weakly? I mean, the behavior of a given particle depends weakly on the behavior of another tagged particle. That is different from saying that the quantities of interest depend weakly on the state of a given particle, which is not true. What I said before is that if I sit on a given particle and modify one other particle, just one, the difference is small. [Comment:] I would say it is mean field: essentially you assume that close to any point, in each ball, you have a macroscopic proportion of your N particles, so something of the order of N particles, and then you have this jump process which is just like Kac, with rate 1/N each; so it is mean field, no low density or anything like this. [Answer:] Right, it all works like this. [Chair:] Good, so we can close the session of this morning; thanks to all the speakers.