Let's start this morning. We have the second lecture of Laurent Desvillettes about collisions in plasmas. So thanks a lot. And so I will try to speak at least a little of plasmas and of collisions today. But first, I would like to finish the presentation of the example based on reaction diffusion for the entropy method that I began yesterday. So let me recall briefly the set of equations that I wrote down. If you remember, this is related to the reversible chemical reaction between two chemical species, which we will call A and B. And we define the concentrations of those species by small a and small b. So those are two functions of t and x, which are non-negative. t will typically be a time which is in R plus. And x will belong to a smooth bounded set omega, included in R^n. And of course, for real applications, n is equal to 2 or 3. Moreover, I suppose that there is a catalyst in the reactor and that its concentration is a function k of x. And what I will suppose on k is that it is strictly bigger than some k0, which is strictly positive. So k can vary: the catalyst exists everywhere, but its concentration can be bigger in some parts of the reactor and not that big in other parts. Then the set of equations that I wrote down is a set of two reaction diffusion equations, which reads like this. So the first equation concerns A. And you have a diffusion rate, which I will call d1 for this species. And then the reaction term looks like k of x times b minus a. And for the second one, the concentration is denoted by small b. It satisfies also a reaction diffusion equation, but with a different diffusion rate, which I will call d2. And the reaction term is just the opposite of the reaction term for the first one. So of course, this could happen. It really depends on the chemical situation. For example, if the catalyst is a solid, then it will be fixed somewhere in the reactor. And then it is a given function k of x. 
So that would correspond to what I'm writing now. But of course, you could imagine situations in which the catalyst is liquid or even in gas phase. And then, of course, it would follow the flow somehow. But if you are in such a situation, typically you would rather write down convection diffusion equations rather than just diffusion. So everything should be coupled with, typically, a Navier-Stokes equation for the solvent that you have. So this, of course, is one of the simplest possible models coming out of chemistry. So together with the equations, since we suppose that everything holds within a given domain, we should add boundary conditions. And the natural boundary condition, if you want to study the large-time behavior of the system, consists in supposing that the species are confined inside the reactor. And this corresponds to putting homogeneous Neumann boundary conditions for the system. So we add something like this. So on the boundary, the scalar product of the gradient of A with the normal n to the boundary is equal to 0, and the same for B. So these are the conditions which enable one to perform without difficulty the integrations by parts that I will do in the sequel. OK, so let me finally add that, like the concentration of the catalyst, I will suppose here that both d1 and d2 are strictly positive, which corresponds to a situation in which basically A and B are both liquid, let's say, inside the solvent. So when you look at this set of equations, you can wonder what are the natural a priori estimates that you have. And the first one is that since A and B are concentrations, they have to remain non-negative. And I think it's clear here that thanks to the maximum principle, A and B will remain non-negative if it is true at time 0. So let's suppose that A at time 0 and B at time 0 are non-negative, and this will be propagated with time. 
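Collected in displayed form, the system described above, with its Neumann boundary conditions, reads:

```latex
\begin{aligned}
&\partial_t a - d_1\,\Delta_x a = k(x)\,(b - a), \qquad t \ge 0,\ x \in \Omega \subset \mathbb{R}^n,\\
&\partial_t b - d_2\,\Delta_x b = -\,k(x)\,(b - a),\\
&\nabla_x a \cdot n = \nabla_x b \cdot n = 0 \quad \text{on } \partial\Omega,
\qquad k(x) \ge k_0 > 0, \quad d_1,\, d_2 > 0 .
\end{aligned}
```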
So this is just a direct application of the maximum principle for both equations. And also, it's quite clear that because of the shape of the reaction term, if you add the two equations, you get 0 on the right-hand side. And so since d1 and d2 are not necessarily equal, when you add the left-hand side, you end up with something like d over dt of A plus B minus the laplacian of d1 A plus d2 B is equal to 0. And if you want to get something out of it, at least at the beginning, the thing to do is just integrate in x. And in this way, you get that A plus B integrated in x is conserved, which is somehow the conservation of mass, if you wish, for the chemical system. So it's clear that d over dt of the integral of A plus B is equal to 0. And so without loss of generality, I will assume that omega is of Lebesgue measure 1. And initially, the total mass is equal to 2. If it's not the case, it's possible to do simple transformations which enable one to come back to this setting here. So the reason why I put 2 here is that what I wish is that, at the end, the equilibrium is exactly A infinity equal to 1 and B infinity equal to 1 in the domain. So in order to see that, let me now describe the entropy structure related to this problem. So here, the natural entropy is A log A plus B log B. This is the entropy which is given by physics. But it's easier to start from an entropy which is simpler, which is just the integral of A square plus B square in this case. So let's define H of A and B as the integral of A square plus B square dx, that is, the L2 norm squared of A and B. Actually, let's divide it by 2. So if I compute now d over dt of H of A of t and B of t, let's write it this way, and let's suppose that A and B are solutions to the reaction diffusion system, then as you can see, what you will get is A dA over dt plus B dB over dt in the integral, like this. 
And using the equation, you see that this gives d1 A laplacian A plus d2 B laplacian B plus what comes out of the reaction term. And here, you see that it gives k of x times A times B minus A, minus k of x times B times B minus A. So you get A minus B here. And one way to write it is just to say that this is exactly minus k of x times B minus A to the square, like this. And this quantity is non-positive. For the diffusion terms, it's enough just to perform the integration by parts at this level. And it's here that it's useful to have imposed the homogeneous Neumann boundary conditions. So what I get is minus d1 times the integral of gradient A to the square, minus d2 times the integral of gradient B to the square, minus the integral of k of x times B minus A to the square, like this. And this I call minus D of A and B with the usual notations, which means that D is the entropy dissipation for this problem. So as you can see, we have an entropy dissipation which is basically made of two parts, which are completely different. One part is related to the gradients. You can see it here. And another part is related to the reaction. And you can see it here. And there is no gradient inside. So this is a situation that you can obtain in a systematic way when you start from systems coming out of reversible chemistry. Whatever the number of species is, whatever the number of chemical reactions is, you get this structure. But in the general case, you'd better start with the entropy A log A plus B log B rather than A square plus B square. Here it's simpler to do it this way. So let me add that at this level, in order to enter really exactly the structure that we described yesterday, we have to impose the functional space in which this is holding. And this functional space will be what I usually call E. And it will be made of functions a of x, which are non-negative, and b of x, which are also non-negative. 
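As a side check, the two a priori facts derived so far, conservation of the total mass and decay of H along the flow, can be observed on a minimal finite-difference sketch. This is a one-dimensional illustration; the grid, the rates d1 and d2, the catalyst profile k, and the initial data are illustrative choices, not from the lecture.

```python
import numpy as np

N, L = 64, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx           # cell centers of the domain (0, 1)
d1, d2 = 0.1, 0.2                       # strictly positive diffusion rates
k = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # catalyst, bounded below by k0 = 0.5
a = 1.0 + 0.5 * np.cos(np.pi * x)       # initial data with total mass 2
b = 1.0 - 0.5 * np.cos(np.pi * x)

def laplacian_neumann(u):
    # homogeneous Neumann (zero-flux) boundary: reflect the end values
    up = np.concatenate(([u[0]], u, [u[-1]]))
    return (up[2:] - 2 * up[1:-1] + up[:-2]) / dx**2

def entropy(a, b):
    # H(a, b) = (1/2) * integral of a^2 + b^2
    return 0.5 * np.sum(a**2 + b**2) * dx

dt = 2e-4                               # small enough for explicit stability
mass0 = np.sum(a + b) * dx
H = [entropy(a, b)]
for _ in range(500):
    ra = k * (b - a)                    # reaction term, opposite signs
    a = a + dt * (d1 * laplacian_neumann(a) + ra)
    b = b + dt * (d2 * laplacian_neumann(b) - ra)
    H.append(entropy(a, b))

# conservation of the total mass, up to machine precision
assert abs(np.sum(a + b) * dx - mass0) < 1e-10
# monotone decay of the entropy along the flow
assert np.all(np.diff(H) <= 1e-14)
print("mass conserved, entropy decays")
```

The reflection stencil makes the discrete laplacian sum to zero exactly, so the mass identity holds to machine precision, not just approximately.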
And such that, because of this identity here, which is the conservation of mass, we will impose that the integral of a of x plus b of x is equal to 2, which corresponds to the initial data that we have here. So that is the space E. So we already know that we have an entropy structure. Let's now check if this entropy structure is strict in the sense that I defined yesterday. That is, let's suppose that D of A and B is equal to 0, and let A and B belong to E. Let's look at what it means. As you can see, the fact that this quantity is equal to 0 will mean that all the different terms are equal to 0. And since we supposed, at the beginning, that d1, d2, and k are strictly bigger than 0, this imposes that gradient A and gradient B are equal to 0, and moreover, that A is equal to B. So in other words, A and B are two constants. And those constants have to be the same. And since you know that you have this equality here, remembering that we took a space omega which is of Lebesgue measure 1, this imposes that A equals 1 and B equals 1, which is our equilibrium here. So the equilibrium will be defined by this. It's clear that if we suppose that we have an equilibrium for the equation, which means a solution of the elliptic system which is here, then just by multiplying the first one by A, the second one by B, and by integrating, I will get, this is exactly, by the way, the computation which is done here, I will get that D of A and B is equal to 0. So being an equilibrium also implies this. And it is clear that if A equals 1 and B equals 1, then you get an equilibrium for the system. So actually, all of this is equivalent. And we get the strict entropy structure provided that H of A and B attains its minimum over E at the point A equals 1, B equals 1. So in order to check that, let's write A as 1 plus something and B as 1 plus something. 
Let's define A as 1 plus alpha and B as 1 plus beta, in such a way that, because of this, the integral of alpha plus beta is equal to 0. Now alpha and beta are not non-negative anymore. So this is just a change of variables. So now if you write H of A and B in terms of alpha and beta, this is just one half of the integral of 1 plus alpha to the square plus 1 plus beta to the square, dx. And as you can see, just by expanding the second order polynomial here, you have the first term, which is one half of 1 plus 1, and this gives you 1 because omega is of Lebesgue measure 1. So I get 1. Then there are the square terms, which give you one half of the integral of alpha square plus beta square. And finally, there are the cross terms, alpha plus beta. But because the integral of alpha plus beta is equal to 0, those terms do not appear actually in H. So that finally, I think it's clear on this formula that if alpha and beta are equal to 0, you recover exactly the equilibrium, A equal 1, B equal 1. And if they are not equal to 0, then this is strictly bigger. So this is strictly bigger than H of 1 and 1 if alpha and beta are not equal to 0 in the following sense: the couple alpha, beta is different from 0, 0. So we have a strict entropy structure thanks to this. So now, if you remember, the next step consists in checking whether or not there is an entropy-entropy dissipation estimate. And let me write down what it means here. So do we have an entropy-entropy dissipation estimate? Since I'm lazy, I will systematically write EED. So in order to check that, let's write down what is the entropy dissipation and what is the relative entropy here. And actually, it's clear that for the relative entropy, it's better to use the variables alpha and beta. So let's write it in terms of alpha and beta. Let's say D of A, which is 1 plus alpha, and B, which is 1 plus beta. Of course, for the gradients, it's the same to consider alpha or A and beta or B. 
So I will just write down d1 times the integral of gradient alpha to the square plus d2 times the integral of gradient beta to the square. And also, for B minus A, it's the same to write beta minus alpha. So let's write it here. So this is D. And now what we have to know is whether or not this is bigger than some constant, let's call it C0 as yesterday, times the relative entropy, which is just H of A, B minus H of 1, 1. But H of 1, 1 is just 1. So here, H of A, B minus H of 1, 1 is just one half of the integral of alpha square plus beta square. And as you can see, the question becomes a very simple question of functional analysis, which is: is it possible to bound somehow the square of the L2 norm of alpha and beta by basically the L2 norms of the gradients of alpha and beta to the square, plus something which relates alpha and beta? And this, of course, can hold only under the condition which is written here, which, in terms of alpha and beta, can be rewritten like this. So as you can see, this is just a sort of functional inequality question. And let me briefly explain; it actually takes four lines to solve this. But I want to write down precisely those four lines, because they are really the core of all the computations which are done for more complicated models. So let's try to proceed in a rational way. The first thing which one wants to do is to take the infimum of d1, d2, and k in order to eliminate all parameters in the inequality. So here I take the infimum of d1, d2, and k, or if you prefer k0, with the notations that were introduced previously. And I take here the integral of gradient alpha to the square plus gradient beta to the square plus alpha minus beta to the square. This is a natural thing to do. The next natural thing to do is to use the Poincaré inequality in order to transform the gradients into terms which directly involve alpha and beta. 
And here, since we are in the context of Neumann boundary conditions, the natural way to do it is to use the variant of the Poincaré inequality which is sometimes called Poincaré-Wirtinger, in which the integral of the gradient to the square controls, up to a constant, the L2 norm to the square of the function minus its average value on the domain. So this is the Poincaré-Wirtinger inequality. So up to a constant, which I will denote by P of omega, which is related to the Poincaré constant of the domain, I can transform this inequality like this. So remember that the domain is still of Lebesgue measure 1, so the average is just the integral over omega of alpha here, plus the same for beta like this, plus alpha minus beta to the square. So in this way, I eliminated the gradients and I replaced them by the distance between the functions and their mean values. So as you see now, the whole point is to understand the following: on one hand, you have two quantities which control the distance of each function to a constant, its average; and on the other hand, you have something which tells you that the two functions are close to one another; and you want to see why this implies, in some sense, that you are close to the situation in which both constants are equal to 1. OK, I don't know if it's clear. So at this point, actually it's a rather better idea to start from the other direction. That is, to start from the integral of alpha square plus beta square and try to bound it from above. So let's try to do this. So now I look at the other side of the inequality, the integral of alpha square plus beta square. And at this point, the natural thing to do is to try to replace the functions by constants. And I know that I will, in some sense, be allowed to do that because I control both quantities. So here the natural thing to do consists in replacing alpha by alpha bar, that is, this quantity, plus alpha minus this quantity. 
Since this is done with squares, there is a 2 which will appear in the inequality. And I will get something like this. I hope this is understandable. I just used the inequality; let me write it this way here. So I just use that alpha is alpha minus its average value, plus its average value, like this. And then I just say that alpha to the square is less than 2 times the first quantity to the square plus 2 times the second quantity to the square. And I did the same for beta. OK? So here you see that we are in rather good shape, in the sense that the quantities which appear here and here are controlled by the entropy dissipation here. And so in order to get a constant which bounds this in terms of this, I just need to show that those two quantities, which are now numbers, are controlled by the rest. You agree? And so this is the last step of this very short proof. So let's start from those quantities which are numbers. Now the natural thing to do, since we control alpha minus beta, is to transform alpha and beta into alpha plus beta and alpha minus beta. OK? So if I do that and I take care of the constants, I can write exactly this. So I hope I did not miss anything with the constants 2 appearing in the equation. And now it is almost finished because, sorry, I want to be a little careful, it's like this. Because the quantity which is here, the average of alpha plus beta, is exactly equal to 0. And it's there that we use this constraint in the problem. So this is 0. And here, as you can see, you have the square of the average of alpha minus beta. And what you control is the integral of alpha minus beta to the square. So you just use Cauchy-Schwarz, and you get that this is controlled by the integral of alpha minus beta to the square. And I hope it's possible to follow everything. But this finishes the proof that there exists a C0 which works. And as you can see, this C0 depends on the diffusion rates d1 and d2. 
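Written out compactly, the four lines of the proof are, with the notation $\bar\alpha := \int_\Omega \alpha\,dx$, recalling that $|\Omega| = 1$ and $\bar\alpha + \bar\beta = 0$:

```latex
\begin{aligned}
D(a,b) &\ge \lambda \int_\Omega \Big( |\nabla \alpha|^2 + |\nabla \beta|^2 + (\alpha - \beta)^2 \Big)\,dx,
\qquad \lambda := \min(d_1,\, d_2,\, k_0),\\
\int_\Omega |\nabla \alpha|^2\,dx &\ge \frac{1}{P(\Omega)} \int_\Omega (\alpha - \bar\alpha)^2\,dx
\qquad \text{(Poincaré--Wirtinger; the same for } \beta\text{)},\\
\int_\Omega (\alpha^2 + \beta^2)\,dx &\le 2\int_\Omega (\alpha - \bar\alpha)^2\,dx
 + 2\int_\Omega (\beta - \bar\beta)^2\,dx + 2\,(\bar\alpha^2 + \bar\beta^2),\\
\bar\alpha^2 + \bar\beta^2 &= \tfrac12 (\bar\alpha + \bar\beta)^2 + \tfrac12 (\bar\alpha - \bar\beta)^2
 = \tfrac12 (\bar\alpha - \bar\beta)^2 \le \tfrac12 \int_\Omega (\alpha - \beta)^2\,dx,
\end{aligned}
```

the last step by Cauchy–Schwarz. Combining the four lines gives $D(a,b) \ge C_0 \int_\Omega (\alpha^2 + \beta^2)\,dx$ with $C_0 = \lambda / \max(2P(\Omega), 1)$, which is the EED estimate.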
It also depends on the minimal amount of catalyst that you have put in the reactor, that is, the minimum of k of x. And it finally depends on the domain, because you use the Poincaré constant of the domain. So let's say all of this is completely elementary, but it is really what is at the core of all the proofs which are related to reaction diffusion systems coming out of reversible chemistry, when you try to prove that you have exponential convergence towards the steady state. In this specific case, since everything is linear, it's really possible to do some Fourier analysis and to get actually the real best possible constants. And it's quite interesting to check how it fits with what is given to you by this very simple proof. And this has been done in particular by Klemens Fellner. But this proof does not really use very much the linearity of the system. And it can be extended to many, many different nonlinear systems of the same form. So I will then put a bunch of names of people who have contributed to this field. Let me first say that probably the first group which really tried to do this systematically is a group of former East Berlin around Gröger, Hünlich, and Glitzky. What did I want to add on this? I think maybe that's enough on this reaction diffusion system. Yes, one last word about the fact that an interesting question is whether you can remove part of the hypotheses that I made. That is, can you, for example, suppose that d1 is equal to 0? Or can you suppose that k can be equal to 0 in part of the domain? And actually, all those questions, which are quite interesting, are directly related to controllability issues for reaction diffusion systems. So you can find the answer to that kind of question in books on controllability of parabolic equations. So there is a link between the two fields in some sense. OK, so maybe it's time for me to give a few references about this part of the lecture. 
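The Fourier computation mentioned here can be sketched numerically. Assume, for simplicity of illustration, a one-dimensional domain (0, 1) and a constant catalyst level k (all numerical values here are illustrative choices, not from the lecture). Each Neumann mode cos(m pi x) of the linearized system evolves under a 2 x 2 matrix, and the sharp decay rate is the slowest decaying eigenvalue once the conserved mass mode is excluded.

```python
import numpy as np

d1, d2, k = 0.1, 0.2, 1.0        # illustrative diffusion rates and catalyst level
mu = lambda m: (np.pi * m) ** 2  # Neumann eigenvalues of -d^2/dx^2 on (0, 1)

# m = 0 mode: eigenvalues are 0 (conserved total mass) and -2k
rates = [2.0 * k]
for m in range(1, 200):          # higher modes decay ever faster
    A = np.array([[-d1 * mu(m) - k,              k],
                  [             k, -d2 * mu(m) - k]])
    # eigvalsh sorts ascending; the last entry is the slowest eigenvalue
    rates.append(-np.linalg.eigvalsh(A)[-1])

sigma = min(rates)
print(sigma)                     # sharp exponential rate of convergence to (1, 1)
```

For these values the minimum is attained at the first mode m = 1, and it is smaller than the reaction-only rate 2k, so the diffusion really limits the convergence.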
This is really a huge field, and it's not possible to quote everyone. But I wanted to quote a few groups which worked on these entropy estimates, first by saying that as far as kinetic equations are concerned, there was really a lot of work done on Fokker-Planck type equations. And I think that was clear from the talk of Anton that we saw yesterday. And certainly, the work of the probabilists in the 80s, of Bakry-Émery, is one of the most influential works in this domain. And I also wanted to quote actually this paper, which was, I think, already quoted by Anton yesterday, in which everything is written in the language of PDEs. And it's, I think, one of the most influential works in PDEs. And it's really very useful for people who do not come from probability to understand what happens. What I would like to present in the next talks is really works which are related to the papers here, which concern the Boltzmann and the Landau equations. So I will not present it immediately. I will say more later. Let me add that there were works done on the coagulation fragmentation Smoluchowski type equations, which are also part, let's say, of kinetic equations, which are maybe not as well known as the works on, let's say, rarefied gas models. But I want to quote them. So there is one by Aizenman and Bak, which is a work from, I think, the late 70s. And this work has not been cited a lot, because probably the title was not exactly coherent with the, let's say, modern vocabulary on the topic. But I think it's an extremely interesting work, and I absolutely wanted to quote it, and also the more recent paper by Jabin and Niethammer on the same domain. So this was more to explain what happened in kinetic theory. But I also wanted to say a few words about parabolic equations. And so this last example corresponds to the setting of reaction drift diffusion. I already quoted Glitzky, Gröger, and Hünlich. 
And I have also worked a lot in this field with Klemens Fellner in the past years. But there are also works on completely different equations or systems. And I think it's interesting to see that there are a lot of works done on nonlinear diffusion. One of the first ones was done by Del Pino and Dolbeault. And there are also many works done on equations of higher order, like Allen-Cahn and that kind of equation. And I wanted to quote at least this paper by Cáceres, Carrillo, and Toscani. But there are many more recent papers on the same subject, which is, I think, quite interesting. Anyway, this is just a small sample of many, many works which have been done on the question. For the entropy method itself, what I presented yesterday, I think it's very hard to find what would be the earliest reference, because it has been used probably for many, many years without using necessarily exactly the same vocabulary. But I would say that this method was already quite well known in the 70s for PDEs. And for ODEs, I think one should go back maybe to Lyapunov himself to find, let's say, the first works on this topic. So I preferred maybe not to cite people directly on this, because it's quite hard to find who first used it. Maybe I will stop for just a few minutes and ask you if you have questions on this first part, because now I will present a quite different topic. Yes, please. Maybe just about this k of x. So you just need it to be positive on a set of positive measure inside of omega, right? Yeah, more precisely an open set, a small open set. Open? I think one should check the latest papers on controllability, but I'm sure that it works if it is an open set. Then maybe it's enough that you have a set of strictly positive measure. Another question. So you mentioned that from the physical point of view, a log a plus b log b would be more natural as an entropy. So what happens if you redo the same computations with that entropy? So what kind of changes appear? 
So actually that's what I should have done in reality. It's just that with this simpler entropy, a square plus b square, it's really easier to present it in this specific case. But this is typically something which cannot be extended to more complicated equations. If you start from the real physical entropy, which is the one based on a log a plus b log b, what will happen is that, first, you can treat many more systems. And then the main difference is that when you compute the entropy dissipation, you will typically get a Fisher information instead of the square of the L2 norm of the gradient. So you would have to prove some log-Sobolev inequality at that stage of the proof? So sometimes one of the possible tools is a log-Sobolev inequality. But let's say that it's also sometimes useful to write the Fisher information as the gradient of the square root of a, to the square, up to a constant, and then to work on the square roots. The reason for that is that instead of getting a minus b typically in the reaction term, if you use a log, you will end up with terms like a minus b times log a minus log b. It looks more like this. And then there is a natural inequality, which is that this is bigger than a constant, which is actually maybe 1, but I'm not completely sure, times square root of a minus square root of b to the square. This is a very easy computation. And so since you can write the Fisher information in terms of the square root and also the reaction term in terms of the square root, it is sometimes a good idea to work with the square roots. Well, the price you have to pay usually is that the constraints, such as the one which is written here, are not so nice once you use square roots, let's say. And then just time for a final question. So if you tell the story this way, it seems that the Poincaré constant of the domain is kind of crucial to have convergence to equilibrium. And this, I mean, is not clear, because that's just this proof, right? So somehow, if the domain has spikes or? 
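As a side note on the inequality just mentioned: it in fact holds with the constant 4. Writing $a = s^2$ and $b = t^2$, the claim reduces to $(s+t)(\log s - \log t) \ge 2(s-t)$, which follows from $\log u \ge 1 - 1/u$:

```latex
(a - b)\,(\log a - \log b) \;\ge\; 4\,\big(\sqrt{a} - \sqrt{b}\big)^2,
\qquad a,\, b > 0 .
```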
Yeah, so for bad domains, let's say, you will have bad constants, that's true. But maybe it's a better idea to use a different way of controlling the difference between the function and its average than to use the Poincaré inequality. So for example, you could use Sobolev type inequalities, or, let's say, different types of inequalities which will control this. So you would expect that the convergence to equilibrium is independent of the Poincaré constant, in a sense? No, no, I would not say that. But what I would say is that the Poincaré inequality is maybe not the best tool for everything. But anyway, if the domain is very strange, you will have problems controlling the difference with the average by the gradient. Are there examples? Yeah, take a dumbbell, typically. So take a dumbbell-type domain, something like this. OK. And suppose that the neck is very thin; then you will have some trouble controlling the difference with the average. And so typically, for example, you can have situations in which also the diffusion terms are, let's say, big here and small here, or things like that. And you will have some trouble controlling this part. So if you're looking really for best constants, I think they will depend on the domain. Maybe not directly through the Poincaré constant, but they will depend on the domain somehow. OK, that's interesting. Anyway, it's clear that, I did not write this on the board but I should have, if you do not have a connected domain, then it's clear that it doesn't work. In cases like this one, where you can have different entropy functionals, do you expect that the physical one will yield a better convergence rate? No, I'm not sure. Actually, I would say that in situations in which you can use the L2 norm, it's usually a better idea to use it. But there are not so many cases in which you can do it. And I would say that as soon as you have nonlinear terms, it's really difficult to avoid using the physical entropy. OK, thanks. 
So I have still 15 minutes to present quickly the Boltzmann equation. I'm not sure that it will be enough, but I will try. So the equations I want to describe now are models which were written respectively around 1870 and 1940. So the first one is the Boltzmann equation. And it's based on the idea of Boltzmann that it should be possible to describe a gas by looking at binary collisions between the different molecules of the gas. So this is the second part: Boltzmann and Landau operators and their entropy structure. So the idea is that in the spatially homogeneous context that I will present, the unknown is the density of molecules which, at time t, have velocity v in a rarefied gas. We also suppose that the gas is actually monoatomic. I will not say too much about this, but this is an assumption which is clearly not satisfied by air, for example. So the idea of Boltzmann, with this quantity here, actually, this was already done by Maxwell, seemingly, but well, anyway, the idea was to try to compute the evolution of f due to the binary collisions. And the first assumption is that those binary collisions happen on a scale of time which is extremely short with respect to the scale of time at which you observe the evolution of the gas. So let's say the time for collisions is very small with respect to the time of observation. Because of this assumption, we will write an equation which is autonomous, in terms of ODE vocabulary. Or if you prefer, the time t will be a parameter in the equation. So we will say that df over dt of t and v is equal to a certain operator which acts on f at a given point v. And t here is a parameter in the operator. But the tradition is to forget about this abstract notation and to write it Q of f of t and v, which I will do from now on. Then the second assumption is that all collisions are binary, which means that we do not take into account collisions between more than two molecules. 
And this is directly related to the fact that the gas is rarefied. If you imagine a gas which is quite dense, there will be collisions between more than two molecules, which will have an effect on the evolution of the density. So we look only at binary collisions. This is the second assumption. And because of this, the point is that you can write the equation in the following way. You first have to take into account the possibility for a molecule which had a velocity, let's call it v prime, that after a collision with another molecule, it will get the velocity v. So this is done in the following way. You look at the joint density of molecules of velocities v prime and v prime star. And you multiply by the transition rate, which I will write in this way, which tells you that after a collision between molecules of velocities v prime and v prime star, you end up with a molecule of velocity v. And you integrate over all possible velocities v prime and v prime star. And you also integrate over all possible velocities of the second molecule which appears in the collision, against all of this. Let's draw a small picture of a collision. We have two molecules which have velocities v prime and v prime star. Here they interact very quickly. This is the assumption which is here. And after the collision, they end up with respective velocities v and v star. And this tells you the amount of production of molecules of velocity v. But you also have to take into account the possibility that you have a molecule of velocity v which collides with a molecule of velocity v star. And after the collision, the velocity v is changed. And so you have to remove from this kernel the corresponding quantity. So this is done in the following way. You still integrate over the same variables. But now you put here the joint density of molecules of velocities v and v star. 
And you look at situations in which, after a collision, the velocities v and v star are changed into velocities v prime and v prime star, like this. OK? And now here, as you can see, I wrote this F2 here, which is actually not something that you can compute out of f. This is the joint density of molecules which have velocities v and v star. At this point, one needs the assumption that actually this can be factorized into f of t, v times f of t, v star. And this is related to the assumption of chaos. And when someone wants to really prove that this operator is related to the Newtonian physics, let's say, of n particles, one has to show that this chaos assumption holds, starting usually from the assumption that it holds initially. This is sometimes called propagation of chaos. So here, this means that F2 of t, v, and v star is just the product of f of t, v and f of t, v star. So this is an assumption on the dependence of the joint distribution on the two different velocities. So once you make this assumption, you get something which is closer to an equation. That is, Q of f is now the integral over v prime, v prime star, and v star of f of t, v prime times f of t, v prime star, times P of v prime, v prime star gives v, v star, minus f of t, v times f of t, v star, times P of v, v star gives v prime, v prime star. And all of this dv prime, dv prime star, dv star, like this. So the second term corresponds to situations in which you had previously a molecule of velocity v, and because of a collision, it will get another velocity. So you have to remove it from the evolution; Q of f is just df over dt. The next assumption is called micro-reversibility. And it is related to the fact that a process like this, at the microscopic level, at the level of the evolution of the molecules, comes out from Newton-style equations, if you wish, or, if you prefer, from quantum dynamics. 
But anyway, the equations underlying this process are reversible. And so the transition rates which are written here should be equal. So micro-reversibility says that these two are identical. Let me write briefly the result. So you get here F of t, v prime times F of t, v prime star, minus F of t, v times F of t, v star, like this, times the function which is here.

The next hypothesis is conservation of momentum and energy. So conservation of momentum does not cause any trouble. It's a sort of universal law of the universe, if I can say. Sorry, this is recorded. Then conservation of energy actually leads to a certain number of troubles. It is also, of course, a universal law, but then it depends on what you call the energy. And the point here is that the energy is exactly the kinetic energy of the molecules only when the gas is monatomic. If not, there is also energy in the internal degrees of freedom, which are related to the rotations and vibrations of the molecules. So here we really use the hypothesis that the gas is monatomic. So: conservation of momentum and energy, and here kinetic energy, because the gas is monatomic. And this means exactly that the transition rate is nonzero only when those quantities are conserved. So another way to say that is that, up to a certain function which depends on everything, it is multiplied by a Dirac mass on the conservation of momentum and a Dirac mass on the conservation of the kinetic energy, which I will write like this. Of course, this is to be understood in a reasonable way, and we will see then how to parameterize this set. Yeah. No, no, no. So those are assumptions which are completely different. And the last assumption is Galilean invariance, which means that you should get the same equations if you move your reference frame at a given velocity or if you rotate it. And this implies that this quantity cannot really depend on v and v star independently.
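Micro-reversibility and the two conservation laws can be summarized as follows; the function Phi multiplying the Dirac masses is not named in the talk, so that symbol is an assumption:

```latex
% Micro-reversibility: the two transition rates coincide,
P(v', v'_* \to v, v_*) = P(v, v_* \to v', v'_*),
% so the operator becomes
Q(f)(t, v) = \iiint \big[ f(t,v')\, f(t,v'_*) - f(t,v)\, f(t,v_*) \big]
  \, P(v', v'_* \to v, v_*)\, dv'\, dv'_*\, dv_*.

% Conservation of momentum and kinetic energy: P is supported where
% both hold, i.e., up to a function \Phi of all the variables,
P = \Phi \;
  \delta\!\big( v + v_* - v' - v'_* \big)\;
  \delta\!\big( |v|^2 + |v_*|^2 - |v'|^2 - |v'_*|^2 \big).
```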
It has to depend on v minus v star, because you can move the reference frame at velocity v, and in fact on the modulus of v minus v star, because then you can rotate it. So Galilean invariance implies that this function, which is related to what is sometimes called the cross-section, to be precise, has to depend only on the modulus of v minus v star. And then, if you look in the reference frame of the center of mass of the molecules of velocities v and v star, you can see that the relative velocity after the collision is v prime minus v prime star. And thanks once again to rotations, you can see that the function can depend only on the angle between v minus v star and v prime minus v prime star. This is the angle. So actually the Boltzmann equation is written here knowing that P has to have this form and that B can depend only on those two variables.

So let me show on a slide the final result of all this procedure. Here you have the final result for the Boltzmann operator. And I would like to insist on the fact that the notation here is slightly different from the one on the board. Here I use Q of f and f; this is just to emphasize that actually this is a quadratic kernel with respect to f. So this is usually what you find in books. For the rest, I think it's really completely identical to what I wrote down here. I also emphasize that the integral here is on Rn times Rn times Rn, where n is typically equal to 2 or 3. And because of the Dirac masses that you have here, you can drastically decrease the number of integration variables. The conservation of momentum gives you n constraints, so you can remove one of the integrals if you wish, and the conservation of energy gives one extra constraint, which means that you can decrease by 1 again. So instead of having 3n integration variables, you can typically have 2n minus 1. But this I will maybe detail in the next talk.
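A standard way to resolve the Dirac masses and parameterize the conservation set, consistent with the 2n minus 1 count just mentioned (the slide itself is not reproduced here, so this is the usual textbook sigma-representation rather than necessarily the exact form shown):

```latex
% Boltzmann collision operator in sigma-representation:
Q(f, f)(v) = \int_{\mathbb{R}^n} \int_{S^{n-1}}
  \big[ f(v')\, f(v'_*) - f(v)\, f(v_*) \big]\,
  B\big( |v - v_*|, \cos\theta \big)\, d\sigma\, dv_*,

% with post-collisional velocities parameterized by the unit vector \sigma:
v'   = \frac{v + v_*}{2} + \frac{|v - v_*|}{2}\,\sigma, \qquad
v'_* = \frac{v + v_*}{2} - \frac{|v - v_*|}{2}\,\sigma, \qquad
\cos\theta = \sigma \cdot \frac{v - v_*}{|v - v_*|}.

% Integration variables: n for v_* plus n-1 for \sigma, i.e., 2n - 1 total.
```

One checks directly that these formulas conserve momentum (v' + v'_* = v + v_*) and kinetic energy (|v'|^2 + |v'_*|^2 = |v|^2 + |v_*|^2) for every sigma.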
So I propose to stop here for this. OK, so let's thank Laurent.