First of all, I am very grateful to the organizers for the opportunity to be here, and I will present a series of five talks devoted to the Landau equation, an equation which models collisions in plasmas. It was written down first by Landau in the thirties, and, let's say, the mathematical study of the equation started in the late sixties and in the seventies. It has been since a very active topic in mathematics, and actually there are really two main directions in the study of the equation. The first one is the question of the existence of smooth solutions, and the second one is the question of the large time behavior of the solutions. So my series of talks will be rather devoted to the question of the large time behavior. And because of that, the first talk, what I will present today, is really more related to the sort of general structure of equations in which there is an entropy, as it is the case for the Landau equation. In this first talk we will not hear at all about the Landau equation; it will be introduced actually in the second talk, at the end of the second talk. This first talk is really devoted to the general structure of equations having an entropy. Let me give a few standard definitions. We will consider a Cauchy problem, by which I mean that we have an equation which writes like this. So you have, let's say at the abstract level, an equation in which you have an unknown, which I call u, which depends on time. And the Cauchy problem is given by: the derivative in time of u is equal to some operator acting on u. 
The operator A is in general nonlinear, and it is defined on some functional space, which I call E, in which the unknown u takes its values. And then we supplement the equation with an initial datum. The first definition is the definition of a Lyapunov functional, which in this context we will also call an entropy. By entropy I mean here simply a quantity which decreases along the flow; it is not necessarily the physical entropy. A typical example is an energy, or the entropy of Boltzmann that we will see in the sequel. And this is a function which goes from the functional space E to R. And we say that it is a Lyapunov functional when, if we take the derivative with respect to time of H of u of t, then we get something which has a sign, in that case non-positive. So this is a function such that, for any solution u of the equation, the quantity d over dt of H of u of t is non-positive. And when it is so, we will use the notation minus D of u of t for this quantity here. And the D here is for dissipation: this is the so-called dissipation of the entropy H. So D is called the dissipation. This is a situation which is quite common in physics and in many other fields. And let me give a very simple example consisting of a system of two differential equations. So suppose that you have the system x prime is equal to minus y and y prime is equal to minus y. And you take initial data which are respectively in R and in R plus, like this. Then the functional space will just be R times R plus. And one can easily check that, starting with an initial datum which is non-negative here, you always remain, because of the differential equation, in the space of non-negative real numbers. So you can really select E like that. 
And for example, for H you just take H of x and y is equal to x. And it's immediate to see that if you select solutions of the differential system, you get here minus y of t. And since y is non-negative, minus y is non-positive, and you get this. And so here D of x and y is defined just as y. However, this is a typical example in which it's not really useful to write down this Lyapunov functional. And the reason for that is that typically here you get an entropy, or a Lyapunov functional, which is not bounded below. And when it's like that, then usually it is not very helpful to consider this entropy. Because of this, one introduces a new notion for the Lyapunov functionals, which we will call strict, in the sense that they are bounded below. And at the point where the entropy reaches its minimum, we will suppose that we have an entropy dissipation which is equal to zero, and which also corresponds to the equilibria of the equation. So when we are in such a situation, we will call the entropy strict. So let me write this. This is, let's say, definition two, the previous one being definition one. So we still consider the equation which is written here. And when H is an entropy for the equation, it is said to be strict when the following properties are satisfied. First, there exists a unique equilibrium, which I will call u infinity, for the equation. This means exactly that A of u infinity is equal to zero, and I will systematically call it the equilibrium. And we will suppose moreover the following: D of u is equal to zero if and only if u is the equilibrium, and also H of u infinity is strictly less than H of u for all u different from u infinity. So u infinity realizes the minimum of H. Another way of saying the first condition, if you prefer, is to say that A of u is equal to zero if and only if u is u infinity; this is the same as saying that there is a unique equilibrium. So this is the situation that we will systematically consider in the rest of the lectures. 
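In symbols, the two definitions stated so far can be collected as follows (u infinity, H and D are the notations used on the board):

```latex
% Cauchy problem:  \partial_t u = A(u), \quad u(0) = u_{\mathrm{in}}, \quad u(t) \in E.
% Definition 1: H : E \to \mathbb{R} is a Lyapunov functional (an entropy) when
\frac{d}{dt} H(u(t)) = -D(u(t)) \le 0
\quad \text{for every solution } u,
% where D \ge 0 is called the dissipation of the entropy H.
% Definition 2: the entropy H is strict when, in addition,
\exists!\, u_\infty \in E \ \text{with} \ A(u_\infty) = 0, \qquad
D(u) = 0 \iff u = u_\infty, \qquad
H(u_\infty) < H(u) \quad \forall\, u \neq u_\infty .
```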
So we will systematically consider equations in which there is a strict entropy structure like this. (Yes, of course, it's possible also to have an entropy which depends on t in some situations, but in the examples I will present the entropy will not depend on time.) So let me give one example which is more interesting than this one; this one, as you can see, cannot be strict, because basically H is not bounded below. So let me give this second example, and it will still be a system of two ordinary differential equations, but a system which is a little more complicated than the previous one. So let's call it C2, and it's given by the following equations. We will suppose that we have initial data which are both just real numbers. So here the set E, the functional space that we consider, is just R squared. And the natural Lyapunov functional is just x squared plus y squared. So we take H of x and y equal to x squared plus y squared. So let's compute this quantity. Suppose that x and y are solutions to the system C2. As you can see, you get x prime of t times the derivative with respect to the first variable of H; I just use the chain rule at this level. And here I observe that dH over d1 is just 2x and dH over d2 is just 2y. So I just get 2 x x prime plus 2 y y prime. As you can see, the structure of the system is such that when you multiply by x, you get x squared; you multiply by y, you get y squared; then you add up. And you end up with minus 2 times 1 plus x squared plus y squared, like this, times x squared plus y squared. And, of course, this quantity is non-positive. And you can define the associated entropy dissipation using this formula here. And what you get is that D of x and y, the dissipation, is just 2 times 1 plus x squared plus y squared, times x squared plus y squared. Like this. And now we can check that this is a strict entropy structure. So we have to verify both assumptions here. 
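The right-hand side of the system C2 was written on the board and is not in the transcript; the computation just described (multiply the first equation by x, the second by y, add up) is consistent with the system x' = -(1 + x² + y²) x, y' = -(1 + x² + y²) y, which is therefore assumed in the following minimal sketch checking the identity d/dt H = -D at a few sample points:

```python
# Assumed system C2 (not transcribed from the board):
#   x' = -(1 + x^2 + y^2) * x,   y' = -(1 + x^2 + y^2) * y
# with H(x, y) = x^2 + y^2 and D(x, y) = 2 (1 + x^2 + y^2)(x^2 + y^2).

def rhs(x, y):
    s = 1.0 + x * x + y * y
    return -s * x, -s * y

def dissipation(x, y):
    r2 = x * x + y * y
    return 2.0 * (1.0 + r2) * r2

def dH_dt(x, y):
    # chain rule on H = x^2 + y^2 along the flow
    xp, yp = rhs(x, y)
    return 2.0 * x * xp + 2.0 * y * yp

# d/dt H(x(t), y(t)) = -D(x, y) holds identically:
for (x, y) in [(1.0, 0.5), (-2.0, 3.0), (0.0, 0.0), (0.3, -0.7)]:
    assert abs(dH_dt(x, y) + dissipation(x, y)) < 1e-12
```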
So here, let's first look at the equilibria of the differential system. As you can see, this will be equal to zero and this will be equal to zero only if both x and y are equal to zero. So in this situation, u infinity, which I will write x infinity, y infinity, like this, is just zero, zero; and this is the unique equilibrium. Moreover, it's clear that D of x and y is equal to zero if and only if x equals y equals zero. So this is true. And finally, let's look at H of x and y. As you can see, H of x and y is strictly bigger than H of zero, zero, which is equal to zero, if and only if x, y is different from zero, zero. So here it's extremely easy to check that you have a strict entropy structure for this system of two differential equations. Once again, looking at this example here, it's very easy to see that here you don't have a strict entropy structure. Now there is a general principle, which is sometimes called LaSalle's principle, which tells you that, provided a certain number of conditions are satisfied, when you have a strict entropy structure you expect that, if you look at the equation and you let t go to infinity, then you will converge towards the unique equilibrium. And this can be made into a theorem, provided that you put the right assumptions, especially assumptions related to compactness. But I will not go in this direction today, because what I'm really interested in is effective, explicit estimates for the speed of convergence in such situations. So because of this I will now give a third definition, which is in some sense even more stringent than the definition of the strict entropy structure. And what I will write now is sometimes called an entropy-entropy dissipation structure. Let me write it here. So this is the third definition, the one which actually will systematically be satisfied by the equations that we will present in the rest of the talks. So we still consider an equation of the general form which was written at the beginning. 
And we suppose that it is associated to a strict entropy structure according to the second definition. So this means that we have a unique equilibrium, an entropy, and an entropy dissipation satisfying the constraints which are here. So we have at our disposal u infinity, H and D. We say that an entropy-entropy dissipation estimate holds; for the rest of the talks I will call this EED, I will not write again entropy-entropy dissipation. Sometimes it's called a quantitative entropy estimate, which is actually a vocabulary that I prefer, but it's less common than the vocabulary which is used here. So: it holds when one can find a strictly positive number, which I will call c0, such that for all u in the functional space E, the entropy dissipation is bigger than the constant c0 times the entropy minus the entropy of the equilibrium. So when this holds, we call it usually EED, like I said previously. Before commenting on this, or actually as a first comment, I would like to say that this is not a priori crazy, in the sense that, because of the strict entropy structure, we know that H of u is bigger than H of u infinity. So this quantity anyway is non-negative, and actually strictly positive except when u is equal to u infinity. And D of u basically satisfies the same constraint because of the definition: that is, D of u is equal to 0 only when u is equal to u infinity. So there is some hope that such an inequality holds. By the way, does everybody see here? Yeah, it's okay. So, really, because this is the most important sentence here: such an inequality is not completely absurd a priori. And the second comment is that, as you can see here, in some sense we have completely forgotten about the initial equation. From the initial equation we will retain only those parameters here which are related to the equation: the u infinity, the entropy and the entropy dissipation. 
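Written out, the inequality defining the EED structure reads:

```latex
% Definition 3 (EED estimate): there exists c_0 > 0 such that
D(u) \;\ge\; c_0 \bigl( H(u) - H(u_\infty) \bigr)
\qquad \text{for all } u \in E .
```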
But the solution of the equation does not appear anymore here. So to check something like this has to do, in general, let's say, with functional analysis: it's trying to prove an inequality between functionals which are not directly related to the original equation. Okay, there is some link, but it's not completely direct. Why do we introduce this definition here? It is because there is an almost immediate proposition that we can write down when we have such a situation, and that I will prove immediately. So the proposition tells us that, let's say, when an equation gives rise to an entropy-entropy dissipation estimate, so it's exactly what we defined here, then the following large time behavior estimate on the solutions of the equation holds. This estimate is the following: for all times non-negative, it's possible to bound the entropy at time t minus the entropy of the equilibrium. Remember that this is a non-negative quantity, because we are looking at an equation with a strict entropy structure. And here I will have the difference of the entropy at time zero and the entropy of the equilibrium, times exponential minus c0 t, c0 being the constant which appears in the inequality EED. So as you can see, this is something which tells you that the solutions of the original problem are converging towards the equilibrium in a very specific way. That is, it is the entropy of the solution at time t which converges exponentially fast towards the entropy of the equilibrium. And we will see later that in many situations this is enough, in fact, to get an explicit estimate for the solution itself. But let's first concentrate on this. So the proof is very easy; it is just a direct application of Gronwall's lemma. Indeed, thanks to the entropy structure, we already know that if I compute minus the derivative of this quantity here, this is of course the same as computing minus d over dt of H of u of t, because this one does not depend on t. 
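The proof sketched here fits in two lines:

```latex
-\frac{d}{dt}\Bigl( H(u(t)) - H(u_\infty) \Bigr)
   = D(u(t))
   \;\ge\; c_0 \Bigl( H(u(t)) - H(u_\infty) \Bigr),
% hence, by Gronwall's lemma,
H(u(t)) - H(u_\infty) \;\le\; \Bigl( H(u(0)) - H(u_\infty) \Bigr)\, e^{-c_0 t} .
```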
And so this gives me exactly what I called D of u of t. So this is the entropy structure. And then I use, at the point u of t, the general inequality that I know for any u in E. And this tells me that this is bigger than c0 times H of u of t minus H of u infinity. Like this. So as you can see, I'm exactly in the situation in which Gronwall's lemma holds: I have minus d over dt of something is bigger than c0 times that something, and I deduce from this the inequality above. So we conclude thanks to Gronwall's lemma. This is a kind of general tool for getting explicit estimates for the large time behavior of any kind of equation, provided that there is a dissipative structure represented by an entropy which is strict and which can be quantitatively estimated with respect to its dissipation. Let me first come back to example 2 to show how it works. For example 1 there is of course no hope, because the entropy structure is not strict; but for example 2, which is still written here, let's try to see what happens. Now my dissipation of entropy is 2 times 1 plus x square plus y square, times x square plus y square. And I have to compare it with H of u minus H of u infinity, which here is just x square plus y square minus the same quantity taken at the point x equals 0 and y equals 0. So it's exactly x square plus y square. So as you can see, it is immediate to check that the quantity 1 plus x square plus y square is bigger than 1. And so I can apply the EED inequality with c0 equal to 2. And I deduce from this immediately, as a consequence, that I can use the inequality which is here. And what I get is that x of t to the square plus y of t to the square is less than the same quantity at time 0 times exponential minus 2t. 
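A small numerical sketch of this conclusion (still assuming, as above, that the untranscribed system C2 is x' = -(1 + x² + y²) x, y' = -(1 + x² + y²) y): an explicit Euler integration satisfies the Gronwall bound H(x(t), y(t)) ≤ H(x0, y0) e^{-2t}:

```python
import math

def simulate(x, y, t_end, dt=1e-3):
    # explicit Euler on the assumed system C2
    for _ in range(int(round(t_end / dt))):
        s = 1.0 + x * x + y * y
        x, y = x - dt * s * x, y - dt * s * y
    return x, y

x0, y0 = 0.6, 0.8                 # H(x0, y0) = 1
H0 = x0 ** 2 + y0 ** 2
for t in (0.5, 1.0, 2.0):
    x, y = simulate(x0, y0, t)
    # Gronwall bound coming from the EED estimate with c0 = 2
    assert x * x + y * y <= H0 * math.exp(-2.0 * t) + 1e-9
```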
And as you can see, in this specific example I get really something directly on the solution, because now, if I take the square root of this, this is really the natural distance, the Euclidean distance if you wish in finite dimension, towards 0, and it tells you that it will decay like exponential minus t. So you directly get the quantitative large time behavior of the quantity here by using a tool which, at some point, does not refer explicitly to the equation. So in some sense this is a method which enables you to sort of forget a little about the equation, and to try to get things out of estimates which are directly written on numbers, if you are in finite dimension like for ODEs, or on functions, if you are in an infinite dimensional setting like you have for PDEs or integral equations. So let me now, since I want really to devote most of my time to the Landau equation, write down first an equation which looks somewhat like the Landau equation but which is much simpler; in fact it is linear, and for it it is really possible to use all this machinery. So this equation is a standard Fokker-Planck equation, which is something which has been written by physicists for many, many years; it was probably written first in the early twentieth century. This is the next example, and this will be our first example in infinite dimension, that is, outside of the world of ODEs. Let's now suppose that you have a function F, and I will write the variable v because it's typically a velocity which is the variable here in this function. This will be a density, so it will always be non-negative, and we will consider that, let's say, the natural setting will be the space L1, but I will not write it immediately. Let me just say that here v is an element of Rn, so this can be written in any dimension basically. The operator A that was written down in the general abstract equation will in that case be a differential operator, which can be written in the following way. 
So this is a gradient with respect to the variable v here; so, more precisely, this is the divergence of gradient F plus vF. F has an n-dimensional variable; I write down the gradient, here I multiply F by v, so I also get a vector here, so I have here an n-dimensional vector, and then I take the divergence of it. So this is the so-called Fokker-Planck operator, and as you can see this is actually close to the Laplacian. This would be the Laplacian, and the main difference is that you have this drift operator which is added to the Laplacian. If you had some training in probability, maybe you saw the same operator called Ornstein-Uhlenbeck, though Ornstein-Uhlenbeck is rather used for the semigroup related to the operator, let's say. Well, anyway, this is the same object. And so the first thing to observe is that this operator actually preserves positivity, in the sense, more precisely, that its semigroup preserves positivity: if now you look at the solution of the equation dF over dt equal to AF, and let's say F of 0 and v is given, then F of t and v is positive for all time if the same is true at time 0. This can easily be seen because the first term is the Laplacian, so for the second order derivative you know that if you look, let's say, at the minimum point, you will have a term which is non-negative, and so you cannot go below the minimum; and for the second part it's also easy to see, you can just see it by developing the divergence of vF in two terms. So this is a property which can easily be seen at the formal level. Moreover, you have a second property, which is related to the fact that, since this is a divergence, if you integrate you will get 0, and so you will propagate the L1 norm of F: the integral of F of t and v, if F is the solution of the equation, will be equal to the same quantity at time 0. So because of this, the natural space in which you want to work is the space E of functions F depending on v such that F is non-negative and the integral of F is a given number. 
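A minimal one-dimensional sketch of the conservation property just mentioned, using an illustrative finite-volume discretization (the grid, time step and initial datum below are my own choices, not from the lecture): writing the operator in flux form, with zero flux at the boundary, makes the total mass sum(f)·dv exactly conserved up to roundoff, mirroring the propagation of the L1 norm:

```python
import math

# d f/dt = d/dv ( df/dv + v f )  discretized in flux form on [-L, L].

def step(f, v, dv, dt):
    n = len(f)
    flux = [0.0] * (n + 1)      # flux[i] sits at the interface of cells i-1, i
    for i in range(1, n):       # flux[0] = flux[n] = 0: no flux at the boundary
        df = (f[i] - f[i - 1]) / dv
        vmid = 0.5 * (v[i] + v[i - 1])
        fmid = 0.5 * (f[i] + f[i - 1])
        flux[i] = df + vmid * fmid
    # the flux differences telescope, so sum(f) * dv cannot change
    return [f[i] + dt * (flux[i + 1] - flux[i]) / dv for i in range(n)]

n, L = 200, 6.0
dv = 2 * L / n
v = [-L + (i + 0.5) * dv for i in range(n)]
f = [math.exp(-(vi - 1.0) ** 2) for vi in v]    # off-center initial datum
mass0 = sum(f) * dv
for _ in range(200):
    f = step(f, v, dv, dt=1e-4)                 # dt below the diffusive CFL
assert abs(sum(f) * dv - mass0) < 1e-10
```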
So let's say 1, but you could put of course any other number. So this is the natural space E in which you want to study the equation here, which is a Fokker-Planck equation. So the entropy, or let's say the Lyapunov functional, which is related to this equation has been known for a long time, and this is actually what in physics we would rather call the free energy. So let me write it; this is a free energy, and this is defined as the integral of F log F, like this, plus F v square over 2. So if you are familiar with the entropy in kinetic theory, here you recognize the traditional entropy, and you add to this entropy the kinetic energy; and when you do that, you get exactly what is usually called free energy in physics. So let's check that H is indeed a Lyapunov functional, an entropy, for this equation here. So let F be a solution of the Fokker-Planck equation, so this equation here, and let's compute the derivative of H of F. As you can see, for the first part here, you take the derivative in time of F log F: this gives you log F plus 1, times dF over dt; and the second part here just gives you v square over 2, times dF over dt. So you can write it like this, and you can replace dF over dt by its value, which is AF, which is this divergence here. So you can rewrite it: log F plus 1 plus v square over 2, times the divergence of gradient F plus vF. Now you do an integration by parts. You assume that there is no problem with the boundaries, because basically you have taken a setting in which you are in L1. So doing the integration by parts gives you minus the integral of the gradient of this quantity, which is nothing but gradient F over F, coming out of this, plus v, coming out of v square over 2; so this is times gradient F plus vF, dv. And here you recognize that you have two terms which are proportional, the coefficient of proportionality being F, and so it's naturally something which is non-positive. Let me write it maybe here, because I hope it can still be seen at this level. 
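The computation just described, written out in full (integration by parts in v, boundary terms discarded):

```latex
\frac{d}{dt} H(f)
  = \int_{\mathbb{R}^n} \Bigl( \log f + 1 + \tfrac{|v|^2}{2} \Bigr)\,
      \nabla_v \!\cdot\! \bigl( \nabla_v f + v f \bigr)\, dv
  = -\int_{\mathbb{R}^n} \Bigl( \tfrac{\nabla_v f}{f} + v \Bigr) \!\cdot\!
      \bigl( \nabla_v f + v f \bigr)\, dv
  = -\int_{\mathbb{R}^n} f \,\Bigl| \tfrac{\nabla_v f}{f} + v \Bigr|^2 dv .
```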
So this can be written as minus the integral of F times gradient F over F plus v, to the square, dv. The integral itself is sometimes called the relative Fisher information of F, and we call the whole quantity just minus D of F; it's clearly something which is non-positive. So in this situation we now have an entropy structure, and we have to check first that this entropy structure is strict, and then that we have, hopefully, an entropy-entropy dissipation estimate. So let's first check that the entropy structure is strict; so let's look at what happens when AF is equal to zero, and as we could see, this is AF, this quantity here. So if AF is equal to zero, then it's clear that D of F is equal to zero, because D of F is, up to the integration by parts, just AF multiplied by something and then integrated. So AF equal to zero implies D of F equal to zero. But now D of F is the integral of something which is non-negative, so it will be equal to zero if and only if this quantity here is equal to zero. So as you can see, you will get that, up to the points where F is equal to zero, gradient F over F plus v is equal to zero. And now, as you solve this equation, you get that F is of the form a constant times exponential minus v square over two, because you just have that the gradient of log F is minus v; so when you take the primitive, you get that log F is like minus v square over two plus a constant, and then you take the exponential. And actually the constant can be computed, because we suppose that F has integral one; so the computation of the constant tells you that it is one over the square root of two pi, to the power n, which can be computed easily thanks to the Gauss integral. So this thing is what I defined previously as the equilibrium, that is, F infinity. So as you can see, the equilibrium is exactly a Gaussian function of v, which is actually centered and reduced. 
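The identification of the equilibrium can be recorded as:

```latex
D(f) = 0
  \iff \nabla_v \log f = -v \ \ \text{wherever } f > 0
  \iff f(v) = C \, e^{-|v|^2/2},
% and the mass constraint \int f \, dv = 1 fixes C via the Gauss integral:
f_\infty(v) = \frac{1}{(2\pi)^{n/2}} \, e^{-|v|^2/2} .
```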
We are in a good situation for showing the strict entropy structure, because now it's clear that if you take F equal to F infinity, then AF is equal to zero and also D of F is equal to zero. So this also implies those two things here. And the last thing we have to check is that H is at its minimum only when F is equal to the Gaussian. So remember that this is under the constraint that the integral of F is equal to one. So if you now use the Euler-Lagrange theory at this level, taking the derivative of this, you get exactly log F plus one plus v square over two, and this has to be equal to some constant times one. So what you get is exactly that log F plus v square over two is a constant, and this is exactly the same as saying that F is equal to F infinity. So a very easy computation, let's say of variational calculus, I don't know if it's really the English word for this, but in French it's calcul des variations, anyway, this gives you that H of F, sorry, H of F infinity, is less than H of F for all F in E except, of course, F infinity. So we have the strict entropy structure in this Fokker-Planck example. Now what about the entropy-entropy dissipation estimate? 
As you see, what we would like to get is the following: D of F, which is, if you remember, the integral on Rn of F times gradient F over F plus v, to the square, is it bigger than some constant c0 times H of F minus H of F infinity? Here H of F is defined as the integral of F log F plus F v square over two, and H of F infinity has hopefully been computed on my slide; it anyway should be minus n over 2 times the logarithm of 2 pi, so here we should add n over 2 times the logarithm of 2 pi. This is the entropy, the free energy, for F equal to the Gaussian equilibrium. So the question becomes: is it true that for any F in E, so any F non-negative such that the integral is equal to 1, we can show that there exists a constant which satisfies this? So actually it happens that the answer has been known for now many years, and this is exactly the logarithmic Sobolev inequality of Gross, which I think was first obtained in the mid-seventies. So it's an inequality which is known; actually the proof is very interesting. And I would also like to add the name of Giuseppe Toscani, who in 94 first identified, in the vocabulary of kinetic theory, the use of this logarithmic Sobolev inequality for this problem. And as a consequence, we obtain that this quantity decreases exponentially fast, and actually the c0 is known. So the answer is yes, and the number is 2. And so, thanks to this, it's possible to show that if you look at the solution of the Fokker-Planck equation, then this quantity here decays exponentially fast towards 0, actually with an explicit rate which is known, which is 2. So this is actually the example which is probably, in terms of linear equations, the closest to what will be presented then for the Landau equation. But actually I would like to give a last example; I am not sure that I will have time to finish it today, but we can finish it tomorrow anyway. It's called example 4, and this is an example which comes from the theory of reaction-diffusion equations. I would like to present it especially because I think 
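In the notation of this example, the inequality being invoked reads as follows (the logarithmic Sobolev inequality written relative to the Gaussian F infinity):

```latex
% for every f \ge 0 with \int f \, dv = 1,
\int_{\mathbb{R}^n} f \,\Bigl| \tfrac{\nabla_v f}{f} + v \Bigr|^2 dv
  \;\ge\; 2 \left( \int_{\mathbb{R}^n} \Bigl( f \log f + f \tfrac{|v|^2}{2} \Bigr) dv
  + \frac{n}{2} \log (2\pi) \right),
% that is, D(f) \ge 2 \, ( H(f) - H(f_\infty) ): an EED estimate with c_0 = 2.
```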
that Alexi Vassar will present a talk on a model which is quite close to what I will now present. So this is reaction-diffusion theory, and more precisely this is reaction-diffusion coming out of reversible chemistry. So the situation is the following: you imagine that you have two chemical species, which are called A and B, and A and B can transform one into another reversibly. And somehow the speed of the reaction depends on the catalyst, which is spread in the chemical reactor, but not uniformly. So you denote by k of x the concentration of the catalyst of this reaction. So what is the typical equation you want to write down? You call a and b the concentrations of A and B, and those are actually functions of time and space; x will live in a domain which is the chemical reactor, so this is, let's say, included in Rn, and let's say it is bounded and smooth, while t is just a non-negative number which represents time. So what is the typical equation that one expects for a and b? Well, first one takes into account the reaction. So here you have, let's say, linear kinetics for the reaction, because you have no extra species entering in the chemical reaction; so here you expect to have something which is proportional to b minus a, and here the same with a minus b, because, I mean, you will get A out of B, and A will disappear when it transforms into B, and of course the reverse for the second equation. And in front you will put something which is, let's say, proportional, but let's write it directly like k of x; so this is something which is proportional to the concentration of the catalyst. Of course this is a crude approximation, but let's say it's not absurd. So this is for the reaction part. And then you take into account the fact that the concentrations of the species A and B will diffuse in the reactor; and actually they will diffuse, but not necessarily with the same diffusion rate, because A and B, for example, can be molecules which have a different size or a different mass, and because of this the diffusion in the 
reactor is typically something which is complicated, and there is no reason why d1 and d2 should be equal. And it's important here to take them different in order to get something which is mathematically interesting: of course, if you take them equal, basically you can subtract and add the equations, and then it's extremely easy; but if you take them different, the mathematical theory is a little more intricate. Ok, so I think I will finish with this. Let me just add the boundary condition, which is here typically a Neumann boundary condition, which corresponds to the fact that you expect the species to remain inside the reactor; they do not exit the reactor. And so you add those two things. I think I will stop here, and we will check tomorrow that this satisfies all the assumptions previously written down. Thanks a lot.
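Collecting the pieces described above, the system (with the notation a, b, d1, d2, k and the domain of the lecture) reads:

```latex
\partial_t a - d_1 \Delta_x a = k(x)\,(b - a), \qquad
\partial_t b - d_2 \Delta_x b = k(x)\,(a - b),
\qquad t \ge 0, \ x \in \Omega \subset \mathbb{R}^n,
% with Neumann (no-flux) boundary conditions on the reactor walls:
\nabla_x a \cdot n = \nabla_x b \cdot n = 0 \ \text{on } \partial\Omega,
\qquad a(0,\cdot) = a_{\mathrm{in}}, \quad b(0,\cdot) = b_{\mathrm{in}} .
```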