When the entropy dissipation is equal to zero, we recover that f is actually a Maxwellian function of v. A Maxwellian, if you remember, is the exponential of a second-degree polynomial; it is a specific type of Gaussian, namely a Gaussian for which the matrix in front of the quadratic term is a constant times the identity. OK, so we have to check this. Traditionally this is called the second part of Boltzmann's H theorem, so I think it is Proposition 6, but I am not sure about the number. So let us state the following: if the function B, which is sometimes called the cross-section, and which appears in the equation at the end, just before the integration variables here, is strictly positive, then the only functions for which the entropy dissipation is equal to zero are the Maxwellians, which are defined, as I said, by M equal to the exponential of a plus b scalar v minus c v squared, or v squared over 2 if you prefer. And actually the same holds for the Landau equation. I will not write down immediately what the entropy dissipation is for the Landau equation; we saw that in the last lecture, and I will show it to you again in a few moments. But let me say immediately that the same property holds for the Landau equation.
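In symbols, the statement just made reads as follows (my transcription; the letters a, b, c match the lecture's A, B, C):

```latex
% Second part of Boltzmann's H theorem, as stated above:
% if the cross-section B is strictly positive, then
%     D(f) = 0  <=>  f is a Maxwellian,
% where the Maxwellians are the functions
\[
  M(v) \;=\; \exp\!\Big( a + b \cdot v - c\,\frac{|v|^2}{2} \Big),
  \qquad a \in \mathbb{R},\ b \in \mathbb{R}^d,\ c > 0 .
\]
```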
And as we will see, this is really the central part of the strict entropy structure of the Boltzmann and Landau equations. So let me present the proof. First, since this is called the second part of Boltzmann's H theorem, you can guess that it was already proven by Boltzmann a long time ago. One has to be a little careful there, because of course at the time of Boltzmann, people were not that interested in smoothness conditions on the function f. The way it is written here, in principle, if you want to make a completely rigorous mathematical statement, you should say in which space f is living, and you should first check that the quantity D(f) is well defined on the space you are speaking of. Of course Boltzmann was not that interested in that kind of thing, and when you look at his own proof, you can see that basically he was assuming that f was, let's say, of class C2; to be more precise, it is the logarithm of f which has to be of class C2 or C3, which means somehow that you suppose that f does not touch zero, typically. So I will first give you a proof which is based on the proof by Boltzmann, and then a second proof. In the proof based on Boltzmann's, as you will see, we will take a lot of derivatives, and I will comment at the end on whether one is entitled to really take that many derivatives or not. So let me explain this proof. You start from this quantity being equal to zero. Now if you look at this quantity, it is the integral of something which is non-negative because the logarithm is increasing, the integrand being of the form (x minus y) times (log x minus log y); and moreover, inside you have those two Dirac masses, which sort of fix the set on which the integral is really living. And so this tells you exactly that this is zero if and only if f(v') times f(v'*) is equal to f(v) times f(v*) whenever the equalities which are written under the Dirac masses are true.
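Written out, the case of equality just described is (my transcription of the board):

```latex
% D(f) = 0 holds if and only if
\[
  f(v')\, f(v'_*) \;=\; f(v)\, f(v_*)
\]
% whenever the conservation laws under the Dirac masses hold:
\[
  v' + v'_* \;=\; v + v_* ,
  \qquad
  |v'|^2 + |v'_*|^2 \;=\; |v|^2 + |v_*|^2 .
\]
```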
So the first thing to observe is that D(f) = 0 implies the following: v + v* = v' + v'*, together with equality of the kinetic energies, implies f(v) f(v*) = f(v') f(v'*). And this is due to the fact that we suppose that the cross-section B is strictly positive here. So this is a sort of global statement which is a consequence of the entropy dissipation being equal to zero; I hope it is clear from the formula. And then this tells you exactly that f(v) times f(v*) can be a function only of v + v* and |v|² + |v*|². So this can depend only on those two quantities, or the same divided by two if you prefer. I hope this is also clear; if not, you can refer to the so-called factorization theorem in measure theory, which is something that is proven in five lines, no more, and which tells you exactly this. But I think it is something which can be understood directly. So the first step consists in writing some kind of functional equality, or functional equation if you wish, which tells you that the tensor product of f at two different points depends only on those two variables here, which are the momentum and the energy, and you have to start from here. Now imagine that instead of having this, you had the same but with the momentum only. Then you could say that, well, in that case basically T has to be f, because you take v* equal to zero, so up to a constant T is f, and you would end up with the traditional functional equation f(x) times f(y) equals f(x + y), and you would deduce from this, by the usual methods, that f is actually an exponential, exp(λx) let's say. What I will show now is just a generalization of that. It is more complicated because now you have those two terms here.
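The functional equation, and the simpler momentum-only analogy just sketched, can be written as:

```latex
% The functional equation: there exists a function T such that
\[
  f(v)\, f(v_*) \;=\; T\big( v + v_* ,\; |v|^2 + |v_*|^2 \big)
  \qquad \text{for all } v, v_* .
\]
% Analogy with momentum only: taking v_* = 0 gives T(x) = f(0) f(x), hence
% Cauchy's functional equation, whose solutions are exponentials:
\[
  f(x)\, f(y) \;=\; f(0)\, f(x+y)
  \;\;\Longrightarrow\;\;
  f(x) \;=\; f(0)\, e^{\lambda x} .
\]
```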
So the idea here is actually to find a good differential operator which transforms any function of v + v* and |v|² + |v*|² into zero, without transforming too much the functions which do not depend on those quantities. Once again, suppose for a moment that T depended only on the energy. If v and v* are one-dimensional, this would correspond to looking at functions which are radially symmetric in the (v, v*) space. The right operator to cancel those would be d/dθ in polar coordinates, which you can write, up to a multiplicative factor, as the cross product of (x, y) with the gradient; that is more or less the same as d/dθ, if you think of it. Well, anyway, the point is to find the right operator for the functions we have here, and what happens is that the good idea is to look at something which looks a little like d/dθ in polar coordinates, but which takes into account the fact that here we also have translations in the (v, v*) space, which have to do with Galilean invariance. So you look for an operator which looks a little like d/dθ but which takes this translational invariance into account, and the good choice is actually this one: the operator which is the cross product of v − w with ∇_v − ∇_w. Let me first change notations here and take notations which are compatible with the Landau equation, so I will write w instead of v* from now on. So I use this operator L, defined as the cross product of v − w with ∇_v − ∇_w, and if you prefer coordinates, this is the matrix with entries L_ij = (v_i − w_i)(∂/∂v_j − ∂/∂w_j) − (v_j − w_j)(∂/∂v_i − ∂/∂w_i). OK? It is what is actually written here. So let us check that this is indeed a good operator. So let us take L_ij and apply it to functions which depend on this.
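The operator just introduced, in both its intrinsic and coordinate forms:

```latex
% The Galilean-invariant analogue of d/d(theta):
\[
  L \;=\; (v - w) \wedge \big( \nabla_v - \nabla_w \big),
\]
% with components, for indices i, j:
\[
  L_{ij}
  \;=\;
  (v_i - w_i)\Big( \frac{\partial}{\partial v_j} - \frac{\partial}{\partial w_j} \Big)
  \;-\;
  (v_j - w_j)\Big( \frac{\partial}{\partial v_i} - \frac{\partial}{\partial w_i} \Big) .
\]
```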
The notation is not very precise, but I think it is possible to really understand what happens. So first I multiply v_i − w_i by the derivative with respect to v_j of this quantity. Now this quantity depends on v through the first slot and through the second slot, so you get the derivative of T with respect to the first variable, which is vectorial, so let us call it ∇₁T, the derivative with respect to the first group of variables, taken at the same point, plus 2 v_j times the derivative of T with respect to the second group of variables, which consists of only one variable and which I therefore call ∂T/∂2. I hope the notation is at least vaguely understandable, OK? So it is like this, and then I subtract exactly the same quantity with i and j exchanged. Let me write it more seriously. So I start with ∂/∂v_j, which gives (∇₁T)_j plus 2 v_j ∂T/∂2, and then minus ∂/∂w_j, which gives minus (∇₁T)_j minus 2 w_j ∂T/∂2, OK? And the same exchanging i and j. So the ∇₁T terms cancel at this level, and then the other part also cancels, because as you can see you get (v_i − w_i) times, in the end, (v_j − w_j) times 2 ∂T/∂2, and exactly the same in the line below, OK? Well, so this is a good operator for cancelling such functions, but then you have to use it on the left-hand side of the equation, to see what appears at this level, OK?
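As a quick sanity check, not from the lecture, the cancellation just computed by hand can be verified symbolically, for instance with sympy:

```python
import sympy as sp

# Velocity components of v and w in R^3.
v = sp.symbols('v1 v2 v3')
w = sp.symbols('w1 w2 w3')

# An arbitrary smooth function T of the conserved quantities:
# the momentum v + w (three slots) and the energy |v|^2 + |w|^2 (one slot).
T = sp.Function('T')
energy = sum(x**2 for x in v) + sum(x**2 for x in w)
F = T(v[0] + w[0], v[1] + w[1], v[2] + w[2], energy)

def L(i, j, expr):
    """L_ij = (v_i - w_i)(d/dv_j - d/dw_j) - (v_j - w_j)(d/dv_i - d/dw_i)."""
    return ((v[i] - w[i]) * (sp.diff(expr, v[j]) - sp.diff(expr, w[j]))
            - (v[j] - w[j]) * (sp.diff(expr, v[i]) - sp.diff(expr, w[i])))

# L_ij annihilates every function of (v + w, |v|^2 + |w|^2).
for i in range(3):
    for j in range(i + 1, 3):
        assert sp.simplify(L(i, j, F)) == 0
```

The chain rule produces the two terms 2(v_i − w_i)(v_j − w_j) ∂T/∂2 with opposite signs, exactly as on the board, and they cancel.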
So when you do this computation, if I compute now L_ij applied to f(v) f(w), I get (v_i − w_i) times [∂_j f at point v times f(w), minus ∂_j f at point w times f(v)], and exactly the same but exchanging i and j. Once again, I hope the notation is understandable: by ∂_j f I mean the derivative of f with respect to the variable number j, OK? And this I can rewrite as f(v) times f(w) times (v_i − w_i) times [∂_j f over f at point v minus the same at point w], minus the same term but exchanging i and j, and I get exactly this quantity here. What is inside the bracket here I will from now on systematically denote by q_ij(f)(v, w); sorry for this notation, which is a little complicated. But the point is: if I apply this to this, since f(v) times f(w) is equal to something which depends only on those two quantities, we know that this quantity has to be zero. So, provided that f does not vanish, I have just proven that, starting from the entropy dissipation of Boltzmann being zero, this quantity here is equal to zero; it is quite coherent with the hypothesis, let's say, that the logarithm of f has some regularity, since basically I suppose that f does not vanish. Now let us look at this identity: these are just the components of the cross product between v − w and [∇f over f at point v minus ∇f over f at point w]. And so to say that this quantity is equal to zero amounts exactly to saying that ∇f over f at point v minus the same quantity at point w is parallel to v − w, for all v and w. And, as we will see, this is one way to characterize the Maxwellian functions of v; that is, only the Maxwellian functions of v satisfy this constraint. Actually, you can check already that if f is a Maxwellian, then ∇f over f is an affine function of v, and so if you take this function at point v and at point w and you subtract, you eliminate the constant term, you get something which is linear, and it gives you exactly a multiple of v − w. OK, so this is clearly satisfied by Maxwellians, and the point is to show that it characterizes Maxwellians. But before going further and continuing to write down the proof of Boltzmann, let us have a look at the entropy dissipation of Landau. This I also wrote down at the end of the last lecture, and if you remember, it is an integral over two variables, v and w; it is written here in a given dimension, but actually it is true in any dimension. Inside you have the product of f at points v and w, and you have this function ψ, which is the cross-section; here it is called ψ, so in the theorem, if you want to show the second part, you have to suppose that ψ is strictly positive. If now you try to prove the proposition in the case of the Landau equation, the dissipation being zero means exactly that for all v and w this quantity here is equal to zero. But now this is a quadratic form applied to a vector, OK, and this quadratic form is represented by a matrix which is positive semi-definite, so this means exactly that this quantity here has to be in the kernel of this matrix. But this matrix is the projector onto the orthogonal space to v − w, so this means exactly that this quantity has to be parallel to v − w. So if you start from the Landau equation, you end up immediately at the same point here, and as you can see, the proof of this part of the proposition is just a consequence of the proof for Boltzmann, because there is an intermediate step at which you sort of transform the computation on the Boltzmann kernel into the computation for the Landau kernel. OK, so from now on we will start from here and prove both parts of the proposition at the same time. So let me first describe to you the proof due to Boltzmann. As I said, basically you take as many derivatives as possible, so at the end you need to
know, for example, that log f is maybe C2 or C3. We already took one derivative when we used this differential operator, and now we will take two more. So what we know now is that this equality holds for all distinct indices i and j; let us take derivatives, for example with respect to v_i. So I take the derivative with respect to v_i of this, and I get exactly the term which is here, which you can recognize at this level, OK? Then inside this term you have a dependence on v, so when you take the derivative with respect to v_i, the factor v_i − w_i comes out from here; and here, this term does not depend on v, so you just take the derivative with respect to v_i of this one, and it gives you exactly the second derivative ∂_i∂_j of log f, since this is the derivative ∂_j of log f and you take one more derivative with respect to v_i. OK, so that is for the left-hand side. Now let us look at the right-hand side. There it is easier, because this term does not depend on v_i, i being different from j, so you end up with v_j − w_j, which is not changed, and here you take one more derivative with respect to the variable v_i of log f, so you end up with this, OK? Now, when you did that, somehow you destroyed the symmetry between i and j, because you took a derivative with respect to v_i; so the natural thing to do is then to take a derivative with respect to j, and restore the symmetry. OK, so let us first do it with respect to w_j. You take the derivative with respect to w_j of this quantity here. This term does not depend on w, so it disappears; in this term you have ∂_j log f(w), you take the derivative with respect to w_j, and you get minus ∂_j∂_j log f(w); then this one does not depend on w_j, so it cancels; and in this one the only dependence is through this w_j here, which gives you minus ∂_i∂_i log f(v). So what you end up with is this equation here, which gives you a link between the second derivatives of log f at point w and point v. And this is true for any v, w, i and j, so you have a function of w which is equal to a function of v, so both are constants. This already gives you that the second derivatives of log f with respect to one and the same index are constants; moreover, those constants are identical for different i and j. OK, so you already have one part of the Hessian matrix of log f: this tells you that the diagonal of the Hessian matrix of log f is made of constants which are all identical. OK, let us now come back to the equation here and take a derivative with respect to w_i. If you do that, this term cancels, this one becomes minus ∂_i∂_j log f(w), in this one only this term has a contribution, and it is minus ∂_i∂_j log f(v), and in this one there is no w_i, so it cancels. So you end up just with this, and here, once again, you get a link between the second derivatives of log f, but this time taken with respect to the different indices i and j, OK? So this gives you the fact that, first, this is a function of w and this is a function of v, so both are constants; and moreover, now you have a minus here and a minus here, so when you add the two constants it should be zero, so the constant has to be zero. This tells you exactly that the non-diagonal part of the Hessian matrix of log f is made of zeros. So at the end of the day, what you have proven is that the Hessian matrix of log f is just a constant times the identity, and you will agree with me that this is exactly the same as saying that log f is a polynomial like this, and so f is a Maxwellian, and this gives you this point. Of course, I did not say yet that c is positive, but of course the minus sign gives a hint. So you are right: with this proof you end up with all possible Maxwellians, also the ones which are increasing very fast. So then the whole point is that if you suppose that log f, let's say, is C2, you have to add some kind of hypothesis, like for example that log f is C2 and bounded; but you could think of many kinds of conditions, bounded above
for example. So you have many different kinds of hypotheses which help you remove the possibility of having badly shaped Maxwellians. As you can see, basically Boltzmann worked by taking successive derivatives of the initial identity on f, and so his proof is a priori valid provided that, let's say, log f is C2, and you have to add an assumption which tells you that you can get only a positive c here. And then it looks like this is not really sufficient for, let's say, modern analysis, in which, if you look at a solution of the Boltzmann equation, you hope that f has a certain regularity, but asking for C2 is really a lot. Now, there is another way of seeing all of this, which consists in saying that all of this was done in the sense of distributions; if you look at it that way, you can see that basically you only need log f to be, somehow, locally integrable, and that's it. You can do exactly the same computation, but in the sense of distributions, and you end up with a proof, which is the proof of Boltzmann, but which is actually a sort of modern proof, I would say. However, let me take two minutes to explain what the goal is now. If you remember, the point of all we did on the entropy method is, at the end, to get an entropy-entropy dissipation estimate, that is, a link between D(f) and H(f), which here is the integral of f log f. In some sense, when you wish to prove an inequality, it is rather a good idea to start by checking what the case of equality in the inequality is; sometimes it helps you a little when you then try to prove the inequality. And if you have a proof of the case of equality, which is exactly what we are doing now, which is very robust, let's say under small changes in the equality here, then you have a hope that you can transfer it to the inequality at the end; if the proof is not robust, then most probably you will not be able to transform it into a proof of the inequality at the end. And what is not robust in analysis, in general, is taking derivatives: basically it is robust only if you are in an analytic setting; if not, it is not a very good idea. So the whole point is to abandon the proof of Boltzmann and to try to find a proof which is more robust. More robust means you have to sort of throw away derivatives, and instead take integrals, which is a rather natural notion in analysis, I would say. That is what we will be doing now. But before I show you how this can be done, let me conclude on this proposition by saying that we now have the strict entropy structure, because we have the following, let's say, graph of implications. What we just proved is that the entropy dissipation of Boltzmann being equal to zero implies in fact that the entropy dissipation of Landau is equal to zero, and this implies in turn, let us do it like this, this implies in turn that f is a Maxwellian; this is just what we did in this proof. But we already saw that f being a Maxwellian implies, let's say, that the Boltzmann kernel and the Landau kernel are equal to zero; these were remarks that we made in the previous lecture. So if you look at the operators of Boltzmann and Landau, we already have the two implications which are here; this was easy, if you remember: it consisted just in putting the Maxwellian into the kernels and checking that you get zero at the end. And moreover, if you now come back to the way the entropy dissipations were computed, you can see here that the entropy dissipation is the integral of the kernel times something, so it is obvious that you have an implication between those two things; and the same holds for Landau, that is, the dissipation of Landau is obtained by taking the kernel and integrating it against something, so you have something like this. I hope it is still understandable: in this way, we have made the totality of the circuit, which shows that we have an entropy structure which is strict, except for showing that the entropy itself is the minimum of the possible entropies. So for this, let me write down again what the entropy is for
both the Boltzmann and the Landau equations: it is the integral of f log f. Now take f with given mass, momentum and energy, and remember that those quantities are conserved by the flow of the Boltzmann and Landau equations; that is what we saw last time. If you now take the infimum of H when those quantities are given, so you suppose that the mass, momentum and energy are given and you take the infimum, it is clear that the Euler-Lagrange equation related to that is just that the derivative of this with respect to f, that is, log f + 1, is equal to, by the theory of Lagrange multipliers, a constant times 1, plus another constant times v, plus another constant times |v|² over 2. So let us call it a' plus b dot v minus c |v|², or |v|² over 2 if you prefer, and up to changing a' into a' − 1, which becomes a, we get again that f is a Maxwellian. So if f realizes the infimum of the entropy when the conserved quantities are given, we get again a Maxwellian, which is the very last part of the proof of the strict entropy structure for both the Boltzmann and Landau equations. So now, in order to introduce the very last part of the lectures, which is in some sense the most modern part, because everything I showed up to now has been known for a long time, the point is to find a robust proof of this part here; more precisely, because I am now speaking strictly of the Landau equation, of the fact that the entropy dissipation of Landau being zero leads to f equal to a Maxwellian. And we need a robust proof, which hopefully will give us an inequality at the end. So let us try to do that. If you remember, the starting point, when you look at the Landau entropy dissipation, is that the quantity which is here, and which we called q_ij(f)(v, w), is equal to zero for all i, j, v and w; it is what is written actually here. So we know that this quantity is equal to zero, and we want to show that f is a Maxwellian out of this. What we did previously consisted in taking derivatives with respect to various
quantities in this quantity; we took basically two derivatives, and it worked. Now we do not want derivatives anymore, because we want something robust. So the first thing to do is to take this quantity here, which is rather beautiful because it is completely symmetric, and to group together the terms which depend only on v, which gives this part here, the terms in which you only have w, which gives you this part here, and the mixed terms, so you have a part with w and v and the same with v and w, and you put them in different places, OK? But it is exactly the same; I am just saying that this quantity is equal to zero. And the point is that there is actually a transform which starts from q equal to this, and which gives you the gradient of log f at point v in terms of this quantity. So the point is: you know q in terms of this quantity, and you want to do exactly the reverse, you want to write this quantity in terms of q. The formula is written here; this is the formula which sort of inverts the relation that you have above. So I will try now to explain to you how we get it; as you will see, it is not very difficult once you know what to do. We start from here: q is equal to this. The first thing I do is that I multiply this q_ij by f(w), and I take the integral with respect to w only. So when I do this, let me write it like this, OK? For this part here, I will get the integral of f(w) dw; this comes from the first term here. Then I look at the term which is here; I multiply by f(w) and I integrate over w, and since this is a derivative, ∂_j f, it gives you zero, so this one disappears. Of course it is the same for this one, because when you integrate you just get the integral of ∂_i f, and it gives you zero. Then let us look at this one: this one gives you minus ∂_i f over f at point v times the integral of f(w) w_i dw, OK? And this one gives you the same, but with a plus sign, and you change the i into j. And finally, for this
one, remember that you multiply by f(w) and you integrate. So for example here it will give you an integral of w times ∂_j f; you do the integration by parts, and you get δ_ij times f, and δ_ij is zero since i is different from j, so you get zero. And of course it is the same here, because i and j can be exchanged, so the last one does not give you any contribution. So I end up exactly with the formula which is written here, OK? And as you can see, this is a formula which links ∂_i f over f and this complicated quantity here, which is sort of a component of the cross product of v with ∇f over f, OK? So, next step, and this time I take my paper, because I do not want to make any mistakes at this point: this time I do the same, but using as multiplier not just f(w), but f(w) times w_i, w_i being the i-th component of w. So when I do that, I try to do the same. This term still gives me... OK, so this one is easy. Now the next one: I multiply by w_i and I integrate against f(w); the ∂_j of w_i is equal to zero, so this one is zero. But the next one corresponds to multiplying by w_i f(w), so the f(w) cancels, and I get w_i ∂_i f, which after integration by parts will give me exactly minus the integral of f; so here I will get minus ∂_j... sorry, minus ∂_i f over f times the integral of f(w) dw; it is the one which is coming from this term here. The next one: I multiply by w_i and by f(w), so I get, actually it is better to put it here, so I get minus ∂_j f over f at point v times the integral of f(w) w_i² dw. For this one, I multiply by w_i f(w); it is really better to do it this way, so I transfer it here, and I will write the term coming out of here at this level: it gives plus ∂_i f over f at point v times the integral of f(w) w_i w_j dw. And then one has to treat the last term here: in this one I multiply by w_i f(w), and so I will get w_i squared, but once I integrate by parts with respect to the j-derivative it gives you zero, so this one disappears. And the last
one corresponds to multiplying by w_i; I do the integration by parts, and I will get plus w_j f times... nothing more. So let me check; I really did the computation just now without looking at my notes, so let me check that it is OK. Yes, that's it; I am quite proud. So now I do the last one, and the last one consists in doing exactly the same, using now w_j instead of w_i. So here, of course, you can do the same computation as previously, but you can also notice that i and j can be exchanged, and it helps you a little to write down the good formula. Let us do it this way. So I will now get w_i times ∂_j f over f at point v, like this, times the integral of f(w) dw; then there is a term with ∂_i f over f, which is the integral of f(w) w_j² dw, which corresponds, in the other line, to plus ∂_i f over f at point v times the integral of f(w) w_j² dw; so this one corresponds to this one. Sorry, I wrote the same thing twice; so here it is the term in ∂_j: there is a minus, and there is w_i w_j here. And for the final terms it is quite easy: you get v_i times the integral of f(w) dw, and minus the integral of f(w) w_i dw. So basically you take this one, you exchange i and j, and you change the signs, and you get it. Well, anyway, the whole point here is that this is actually a system in which the unknowns are now indicated here in brackets; actually, it is better to use as unknown minus ∂_i f over
f, so, like this. In the brackets you have the unknowns of a system of size three: the quantity you want to compute, which is ∂_i f over f, but also the same quantity for the other index, and this complicated quantity here, which is a component of the cross product of v with ∇f over f. So once you are here, you just have to solve, and to solve it you just use Cramer's formula, with its determinants, OK? So now, if you look at the formula here, this is just writing Cramer's formula for this system, and you end up exactly with this. For example, you can check the determinant which is here, which is really composed of those columns here that you can see, integrated against f: 1, w_i, w_j; then w_i, w_i², w_i w_j; then w_j, w_i w_j, w_j²; and it is exactly what you end up with from the system. The upper part is a little more complicated, because you have to write down those terms here, and also the terms which are here as the right-hand side of the system; this is the part which appears in the column here, because you are really computing the second unknown of the system, so the second column consists of those things, OK? So now you have done what you needed to do, that is, to invert this formula here: now you have ∂_i f over f in terms of the rest, and the case of equality that you now wish to treat consists in supposing that all the q_ij(f) are equal to zero. So if you now take q_ij(f) equal to zero, you end up with the formula here, and as you can see, here you have a determinant in which v does not appear, so this is a constant, and at this level you have v appearing in the second column only, and linearly, so that this is really an affine function of v. So you ended up exactly with the fact that ∇f over f at point v is an affine function of v, which is, of course, equivalent to the fact that f is a Maxwellian; well, one has to be a little careful here: to be precise, I proved that f is a Gaussian, not directly that f is a Maxwellian. So what is the interest of doing this compared with the proof of Boltzmann? It is that, as you
can see, I used no derivatives here; this was done only by integrating, and so hopefully this is more robust. Actually, it is: it really is much more robust than the proof of Boltzmann. I would like to end this lecture with one small remark, which is that, like a bad student, I did not check that what is in the denominator is not zero. So one has to be a little careful here, and one has to check that this quantity is not zero. Actually, this quantity can be seen as a Gram determinant, but if you do not want to use that, you can just think of what it means: this determinant is zero exactly when those three columns are, let's say, linearly dependent with respect to the measure f(w) dw. So this will be equal to zero exactly when f is concentrated on a hyperplane, with an equation given by the linear dependency of the three columns here. So this is zero only if f is, say, a Dirac mass on a hyperplane; to be clear, as soon as f is a function, let's say in L1, this cannot be equal to zero. I hope it is clear. And so now this is enough for the proof of the case of equality; but if you then think of a possible proof of an inequality, you will have to estimate this determinant from below: you will have to show that it is strictly positive, provided that you are in the right set of functions f. And what is the right set of functions f? It is those functions which typically have finite entropy, that is, such that the integral of f log f is bounded. And what a bound on the integral of f log f provides you is exactly, let's say, a quantitative bound on the fact that f does not concentrate on sets of measure zero, like, for example, hyperplanes. So this is controlled by the fact that f has an entropy which is bounded, which is given if you wish, so that we are in rather good shape from this point of view. I think maybe it is a good time to stop the lecture, and we will see the consequences this afternoon.
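As an illustration, not from the lecture, one can compute this Gram determinant for a concrete density and check that it is indeed strictly positive; here with sympy, for a centered Gaussian in dimension 2, where the three columns are 1, w1, w2 integrated against f(w) dw:

```python
import sympy as sp

w1, w2 = sp.symbols('w1 w2', real=True)
f = sp.exp(-(w1**2 + w2**2) / 2)  # a centered Maxwellian in dimension 2

def moment(expr):
    """Integral of expr * f(w) over the whole plane."""
    return sp.integrate(sp.integrate(expr * f, (w1, -sp.oo, sp.oo)),
                        (w2, -sp.oo, sp.oo))

rho = moment(1)
m1, m2 = moment(w1), moment(w2)
e11, e12, e22 = moment(w1**2), moment(w1 * w2), moment(w2**2)

# Gram matrix of the three functions 1, w1, w2 in L^2(f dw).
G = sp.Matrix([[rho, m1, m2],
               [m1, e11, e12],
               [m2, e12, e22]])
detG = sp.simplify(G.det())
assert detG > 0  # strictly positive: f is not concentrated on a line
```

Since f is a genuine density and not a Dirac mass on a line, no linear combination of 1, w1, w2 vanishes f-almost everywhere, so the determinant is strictly positive, as claimed.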