Thank you. OK, so first let me thank Francesco for the invitation; it is a great pleasure to be here. And thank you for still coming on the last day of the week, on a Friday afternoon. I will try not to be too boring. I will talk about regularity for what is called a four-species reaction-diffusion system. This is joint work with Thierry Goudon and with Cristina Caputo. So what is this system? It is close to the systems introduced by Laurent Desvillettes at the beginning of this week, but here I consider the special case of exactly four species: two species A1 and A3 which can be transformed into A2 and A4, so the reversible reaction is A1 + A3 ⇌ A2 + A4 (I just chose to put the odd indices on one side and the even ones on the other). The a_i, for i from 1 to 4, are the mass concentrations of the species A_i. The system is, for i between 1 and 4 (I could write out all four equations, but you will see there is a very nice structure, so it is pretty easy):

∂t a_i − d_i Δ a_i = Q_i(a),    a = (a1, a2, a3, a4).

The d_i are four numbers, usually different from each other; they are the diffusivities of the species. And the Q_i are given by

Q_1(a) = a2 a4 − a1 a3,

and in fact a1 and a3 have exactly the same reaction term, a2 a4 − a1 a3, while a2 and a4 carry the same term with the opposite sign, a1 a3 − a2 a4. In short, Q_i(a) = (−1)^i (a1 a3 − a2 a4). That is the system. Of course this is one specific case; you can consider the more general situation with p species, and I will try to explain why this particular one is especially interesting.
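Before going on, here is a minimal numerical sketch of this system (a 1-D periodic finite-difference discretization; the grid, diffusivities and initial data are my own illustrative choices, not from the talk). It shows the two structural facts used throughout the talk: the reaction terms cancel in the sum, so the total mass is conserved, and positivity is preserved.

```python
# Explicit-Euler sketch of the four-species system
#   dt a_i - d_i Lap a_i = (-1)^i (a1 a3 - a2 a4),  i = 1..4,
# on a periodic 1-D grid.  All numerical parameters are illustrative.
import math

N, L = 64, 2 * math.pi
dx = L / N
d = [1.0, 0.5, 2.0, 1.5]            # four distinct diffusivities d_i
dt = 0.2 * dx * dx / max(d)         # explicit-scheme stability margin

# smooth positive initial data for a1..a4
a = [[1.0 + 0.5 * math.sin((i + 1) * 2 * math.pi * j / N)
      for j in range(N)] for i in range(4)]

def lap(v, j):                       # periodic discrete Laplacian
    return (v[(j - 1) % N] - 2 * v[j] + v[(j + 1) % N]) / dx ** 2

mass0 = sum(sum(ai) for ai in a) * dx   # total mass; conserved since sum_i Q_i = 0

for _ in range(200):
    q = [a[0][j] * a[2][j] - a[1][j] * a[3][j] for j in range(N)]  # a1 a3 - a2 a4
    a = [[a[i][j] + dt * (d[i] * lap(a[i], j) + (-1) ** (i + 1) * q[j])
          for j in range(N)] for i in range(4)]

mass = sum(sum(ai) for ai in a) * dx
print(abs(mass - mass0) < 1e-10)     # the reaction terms cancel in the sum
print(all(a[i][j] > 0 for i in range(4) for j in range(N)))  # positivity preserved
```

The update for species i uses the sign (−1)^i in front of the common factor a1 a3 − a2 a4, which is exactly the structure that makes the total mass a conserved quantity.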
Because you can ask: why this one? You can of course consider the general case of p species, or a more complex chemistry. Then the Q_i usually have the form

Q_i(a) = (ν_i − μ_i) ( ∏_{j=1}^p a_j^{μ_j} − ∏_{j=1}^p a_j^{ν_j} ).

This is the general setting: you have many species now, they interact, and for each species one product is the production term and the other is the loss term. Of course there is a notion of the order of the system; here our system is clearly quadratic. To read off the order, the important exponent is the one attached to the production term, namely the number ∑_{j=1}^p ν_j, the number of factors a_j appearing there. I should say that for simple chemistry the ν_j are usually integers, but the expression also makes sense for ν_j that are just positive real numbers, so it makes sense to consider all of them. Now, for people working on this there was a very nice paper due to Michel Pierre and Schmitt, from 2000. Basically it shows the following: whenever you consider a system such that this sum of the ν_j is strictly bigger than two, then you can find a dimension n (I should have said that I work with x in R^n; usually one considers R^3) and an initial value such that you have finite-time blow-up.
So in some way, if you put in too much reaction you can have blow-up, and if you want a result valid in every dimension, order two appears to be the limit case. And this is exactly the case of our first theorem. Let me call (1) the system written here. We claim: for any n, so any dimension, and any initial value, there exists a unique solution a in C^∞((0, +∞) × R^n) of (1). So what we show is that in the quadratic case, for any dimension, you have a global smooth solution. Actually we show slightly more. I decided to present everything on this four-species system because it is simpler, but what we prove is that there exists an explicit number ε_n > 0 depending on n (I cannot remember its value) such that if ∑_{i=1}^p ν_i < 2 + ε_n, then the theorem still holds. Of course this ε_n converges to 0 as n goes to +∞, which is consistent with the blow-up result of Pierre and Schmitt. [What is the initial value?] Yes, this is a vector: there are four components, so you take four initial components, bounded in L^1, and actually in L^p for p big enough depending on n. Then you can show that you have a global smooth solution. [Is it unique?] Exactly: once it is smooth, it is unique. So basically this result of Pierre and Schmitt killed the will in this area to try to do things, and before 2006 the only result known was for three species, the reaction A1 ⇌ A2 + A3. And why is that case really easier? Because the reaction term is linear with respect to a1.
So you have a maximum principle with respect to a1; in general there is no maximum principle for the system, so you cannot use these kinds of tools. That is why the three-species case was better, and basically that was it; there was nothing beyond that. And then Desvillettes and Fellner became interested in this, and in particular there was a result with Pierre and Vovelle. Basically they showed, for this quadratic system (and you will see that dimension 2 is actually very special), that you have a global weak solution. Even global weak solutions were not known before. And in some way it is through these papers that all these problems were brought into our community. [For every time?] Yes, exactly: it is not a small-time result. OK, and from there there was a series of results. Hearing about these results at a conference like this one, I got interested in this kind of problem; I like it very much. We began to work with Thierry, and in 2010 we put in place the methods that I will present, and obtained global regularity, first for n = 2 (again, n = 2 is pretty important). Then Desvillettes and Fellner (DF will stand for Desvillettes and Fellner, the other team) obtained the same with other techniques. Actually I am cheating here: I list ours first even though the dates are close. They obtained the result after, but ours was published in the Annales scientifiques de l'École normale supérieure, which is a very nice journal; I am very happy to have it there, but it takes some time to get things published. And finally, again with Thierry, and with Caputo (this is where Caputo came in; Caputo, that's her),
we were able to do it, and you will understand also why this case is easier: global regularity when the sum of the ν_i is strictly smaller than two. So everything below two, even for exponents that are just positive numbers; but we miss the two itself. And, to be completely complete: then again Desvillettes and Fellner obtained the result for any n, but under a closeness condition on the d_i, and I will try to show why that is very natural. If the d_i are very close to each other, up to a certain ε_n, then they are able to get full regularity. And basically now we are able to do it without any condition on the d_i. So why is all of this interesting? There are really two teams, and we do not use the same techniques at all; theirs are also interesting. And there is another series of works, which will not surprise you: Desvillettes and Fellner also developed a theory to study the system for large times, and what they obtain is exponential convergence in time, using the relative entropy method. This really uses the techniques that Laurent explained this week; he showed them on the system in the linear case. If you remember, he said that in the linear case you can use any entropy, and one uses the L^2. In the system case you cannot use the L^2: you really have to use the entropy I will present, the L log L. And on top of that it is very important to have L^∞ control; that is why they became interested in this kind of regularity question. In order to get the exponential convergence you need to control the L^∞ norm and show that it grows at most polynomially. Once you have that, you can run all the entropy machinery on this system and get that kind of result. So that is why these regularity questions matter. OK, so now the first ingredient.
The starting point, and this is pretty natural, was a duality result which was in the Pierre-Schmitt paper; they used it as the basis of all their theory. It works on the total density ρ(t,x) = ∑_{i=1}^p a_i, the sum of all the a_i. By the structure, as Laurent said, you have conservation of the total mass under this reaction: when you sum up all the Q_i, they cancel. So you just sum the equations, the reaction terms disappear, and you get an equation of the form

∂t ρ − Δ( ∑_i d_i a_i ) = 0,

which I will write as

∂t ρ − Δ( d(t,x) ρ ) = 0,    d(t,x) := ( ∑_i d_i a_i ) / ( ∑_i a_i ).

You have no control on this d(t,x) except one thing: ellipticity. Indeed, d(t,x) always stays between the infimum and the supremum of the d_i:

min_i d_i ≤ d(t,x) ≤ max_i d_i.

So this is a kind of elliptic equation. If all the d_i are equal, then it is just the heat equation and everything is easy: ρ is bounded, and when everything is bounded everything is smooth. That is why the closeness condition on the d_i makes sense: if you are near the constant-coefficient case, which is easy, things will be easier. So that first point is pretty clear.
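As a tiny sanity check of the ellipticity bounds (random positive samples, purely illustrative): the mixed coefficient d(t,x) is a convex combination of the d_i, hence trapped between min_i d_i and max_i d_i whatever the positive concentrations are.

```python
# Check that d(t,x) = (sum_i d_i a_i) / (sum_i a_i) stays between
# min_i d_i and max_i d_i for positive concentrations a_i.
import random

random.seed(0)
d = [1.0, 0.5, 2.0, 1.5]                                 # distinct diffusivities

mixed = []
for _ in range(1000):
    a = [random.uniform(1e-2, 10.0) for _ in range(4)]   # positive concentrations
    mixed.append(sum(di * ai for di, ai in zip(d, a)) / sum(a))

print(min(d) <= min(mixed) and max(mixed) <= max(d))     # ellipticity bounds hold
```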
Otherwise, when you have a coefficient you do not know, with only ellipticity, you are in a familiar situation. If you remember what Luis Caffarelli said on Wednesday: there are basically two families of elliptic-type equations, whether local or even non-local. There is the non-divergence case, where the d is outside: d Δu. And there is the divergence case, where the d is in between: div(d ∇u). Here we have the dual of the first one: the d is inside, Δ(d ρ). That changes the game a bit. What Pierre and Schmitt showed, by duality (because one knows far more about the non-conservative form, and this is the dual formulation of the non-conservative form: you multiply by a test function and flip the Laplacian onto it), is that for a solution of this equation with initial value smooth enough, ρ is in L^2 in time and space, and this in any dimension. So ρ is square-integrable. And then, with an equation like this, it makes sense to say: if ρ is square-integrable, you should be able to get a weak solution; getting a stronger solution is where things get more complicated. Now what I will try to show you is why n = 2 is really special with respect to this L^2 norm, and for this I will begin to tell you about our proof. Their proofs are based on energy methods, on tools which are natural in kinetic theory; we began to work on the system from a more parabolic point of view, in the spirit, even if it is far easier in this context, of blow-up methods from parabolic regularity theory.
So: blow-up methods and things like this. If you look at the equation from this point of view, the first thing we want is, somewhat like a Liouville theorem (though it is not a Liouville theorem, it is far simpler than that), to first study a specific situation where we can control the nonlinearity, where we know the nonlinearity is not too big. And since we will want to blow up, we work on a fixed ball, a fixed domain. So define B_r, the ball centered at zero of radius r, and the parabolic cylinder Q_r = (−r², 0) × B_r: a fixed domain in space-time. OK, and the lemma is pretty simple. It says: there exists δ > 0, depending only on the dimension (but the dimension is fixed, so δ is fixed), such that if

∫∫_{Q_2} ∑_{i=1}^4 a_i log(1 + a_i) dx dt ≤ δ,

then a_i(t,x) ≤ 1, say, for all (t,x) in Q_1. Ah, I forgot to talk about the entropy; OK, that is fine, it comes next. So the lemma says: here is my Q_2. If on Q_2 I can control something very close to the entropy (I just put 1 + a_i to make it non-negative, so it is a quantity related to the entropy), then on the smaller cylinder Q_1 I control the a_i. And the point is really just boundedness: once the solution is bounded, it is very easy, by energy methods, to show that it is C^∞. So, why a_i log(1 + a_i)? The proof of this... Yes? Yeah, sorry; yes, I have to choose, I like both.
Yeah, it means that if you control something at the level of the entropy, then you control everything. [Is δ small?] Small enough: there exists a δ, small, such that for any solution a of (1) on Q_2, if you have the entropy bound then you have the L^∞ bound. But δ does not depend on the solution; it is universal. So, several things. The first is that what we use here is De Giorgi. As was said on Wednesday, it is an old method, and it is pretty strong, and it is not the first time it has been used on this kind of system: the first to use iteration techniques on such systems was Alikakos, a long time ago, though he worked with Moser-type iteration rather than De Giorgi. The difference here, remember, is that the equation is quadratic. What is the idea of this kind of lemma? It says: if you can make the bad term (the source term, the nonlinear term) small enough, then it is the viscosity, the diffusion, that kills everything and smooths everything, and you can control it. What I claim here is that I can control the strength of my quadratic term by something which is almost linear, a_i log(1 + a_i), and that is the big point. This is where you really have to be careful; otherwise it is not possible. There is a way, in some sense, to deplete the quadratic term enough to make it controllable, for the regularity we want, by something almost linear. That is the big gain here.
That is the gain that we will be able to push to any dimension; otherwise there would be no possibility. [For this lemma, how important is it that the right-hand side is the same for every equation? Could it be any quadratic term?] What is important is the entropy inequality. So let me write it down. De Giorgi is an energy method, it is really based on an energy estimate, and here the energy is the entropy. So everything is quite different from the point of view of Desvillettes and Fellner except this: we still work with the entropy. I am still, in some way, in the topic of the week, because it is based on entropy. So what is the entropy? Here you have to live with the physical entropy of the system. And since I used δ in the lemma (thanks, Alessio), I still have η available for the entropy. So the entropy η(a), depending on all the components a1, a2, a3, a4, is

η(a) = ∑_{i=1}^4 a_i log a_i.

And if you take the system, multiply equation i by log a_i and integrate, you get something of the form

∂t η(a) + 4 ∑_{i=1}^4 d_i |∇√a_i|² − ∑_{i=1}^4 d_i Δ( a_i log a_i ) ≤ 0.

There is the diffusion term that you have seen with Laurent, the gradient of √a_i squared; I guess there is a 4 here, that is important too. And note, sorry, I went too fast: it is not Δη(a), because the d_i are all different; it is the sum of the d_i Δ(a_i log a_i). But this term I can manage. And the key point is that the reaction contribution is non-positive: in some way the entropy naturally depletes the Q term.
And the proof of this inequality is exactly the same as for Boltzmann, actually; it is the proof you have seen with Laurent. Because what I have done here is: I took the equations, multiplied equation i by log a_i, and summed from i = 1 to 4. And, this answers your question, in this special case, because of the form of the Q_i, what you end up with is

( log a1 + log a3 − log a2 − log a4 ) ( a2 a4 − a1 a3 ) = − ( log(a1 a3) − log(a2 a4) ) ( a1 a3 − a2 a4 ).

Now put x = a1 a3 and y = a2 a4: this is −( log x − log y )( x − y ). Both the logarithm and the identity are increasing functions, and the two factors vanish together at x = y, so ( log x − log y )( x − y ) is always non-negative, and with the minus sign the whole thing is always non-positive. So it is exactly the proof of the H-theorem. The important point is that you have a depletion of the quadratic term. But OK, it is just at the level of the entropy; if we knew that the entropy controls the L^∞ norm it would be great, but that is not true. Still, it is this depletion that we will use. [So in principle, if for every p you had a smart function like the entropy, it could work?] Well, what you will see, what you certainly will see if I have time, is that this depletion buys exactly one power: you have something quadratic, and in the De Giorgi machinery it becomes something almost linear, up to the log. It depletes one power.
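The sign computation above is elementary; here is a quick random-sampling check of it (the sampling ranges are my own illustrative choices):

```python
# Random check of the entropy-dissipation sign: with x = a1*a3, y = a2*a4,
#   sum_i Q_i(a) log a_i = -(log x - log y)(x - y) <= 0
# for any positive concentrations a_i.
import math
import random

random.seed(1)
worst = -float("inf")
for _ in range(10000):
    a1, a2, a3, a4 = (random.uniform(1e-3, 50.0) for _ in range(4))
    q = a1 * a3 - a2 * a4                      # common reaction factor
    Q = [-q, q, -q, q]                         # Q_i = (-1)^i (a1 a3 - a2 a4)
    worst = max(worst, sum(Qi * math.log(ai)
                           for Qi, ai in zip(Q, (a1, a2, a3, a4))))

print(worst <= 1e-12)   # the dissipation term is never positive
```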
So if you have a reaction of order p, it depletes just one power, and you are left with p − 1. And that is why, thanks, you can still have blow-up; indeed we know there is blow-up for order bigger than two. [So ε_n will be less than one in your theorem?] Well, it depends on n. Actually in dimension one you have more; I do not remember exactly, but you can handle bigger powers, you can handle three. It is in dimension two that things really begin to be difficult at the quadratic level. [What is the inequality?] No, you will see, you will see, I hope; I am trying to take my time. OK. So that is the first step. And why do we state this on a fixed cylinder Q? Because then we want to blow up. You zoom, and you zoom again. And why do you blow up? What do you want in this situation? You want to show that the solution is locally bounded, and hence smooth, at a given point. Let me draw: this is my t, this is my x, my solution lives here. Fix a t, fix an x; I want to show that at this point the solution is smooth. And I say: let us blow up. What does it mean to blow up? It means to look at a little cylinder here, of height ε² in time and radius ε in space, and to rescale it so that the rescaled solution lives on a unit cylinder. But you want to do this while keeping the structure of the equation; and this, again and again, in every situation: you rescale in order to keep the structure of the equation.
So you need to find what is called the universal scaling of the equation. By this I mean the following. You consider a_ε in new, local variables (s, y), s in time and y in space, defined as a blow-up of a at the point (t, x):

a_ε(s, y) = ε^α a( t + ε² s, x + ε y ),

and you have to find the power ε^α to put in front so that, when you rescale, you keep exactly the same equation; it must not become a new equation. This depends on your system. In our case, trust me, it is ε²:

a_ε(s, y) = ε² a( t + ε² s, x + ε y ).

So what does this mean? It means that if a is a solution of my system, then for every fixed (t,x), when I zoom like this, a_ε is also a solution of my system. My a lives here; I transform it, and this is where my a_ε lives. So the idea is: if I can show that, whatever the point, by zooming enough I can guarantee that my nonlinearity is not too strong, then the solution will be locally bounded beyond a certain ε, and hence locally smooth. Usually, when you can do this, it means you have a system which is called critical, or sub-critical: you have a quantity which controls the scaling. And usually, when you have that in the sub-critical case, you do not even do this kind of thing; there are other methods, energy methods and so on. The thing is, here it will be more complicated to go to the next step. But let us say that for n = 2 it works, and the reason it works for n = 2 is exactly that this L² norm, the norm given by Pierre and Schmitt, controls the universal scaling when n = 2. Another way to say it: this norm is, in the jargon, critical for the system. It just means that you have a global quantity which behaves exactly right under the zoom.
For this global quantity, you want to know what you gain when you zoom. And how do you check this? You look at the L² norm in (t,x) on the small cylinder around the point:

∫∫_{Q_{2ε}} ρ(t̃, x̃)² dt̃ dx̃

(the ρ squared, with ρ the sum of the a_i; ρ² and the sum of the a_i² are almost the same, so let us do ρ²). This is exactly the L² quantity from before, and you want to compare it to the rescaled solution. If you do the change of variables, you get an ε⁴ from the ρ_ε² (since ρ_ε = ε² ρ), and there are two powers of ε from the change of variables in time and, for n = 2, two from the change of variables in x. So it turns out that this quantity is exactly

∫∫_{Q_{2ε}} ρ² dt̃ dx̃ = ∫∫_{Q_2} ρ_ε(s,y)² ds dy

in the local variables (s, y). And by this I mean that the L² norm controls the scaling, for the following reason: whatever the point you take, since the global L² norm is bounded, you are integrating over smaller and smaller cylinders, so by the Lebesgue theorem the left-hand side goes to zero, and this at every point. So along the zoom, as ε → 0, the right-hand quantity goes to zero as well. And if it goes to zero, since the L² norm controls this a log a norm, for ε small enough the smallness assumption of the lemma holds, and then locally the solution is bounded. The ε depends on the point, but at every point you can do this. OK, so basically the L² norm is critical for n = 2, and usually, when you have this, you can also do it with energy methods.
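The change of variables above can be written out for a general dimension n; the computation below uses the scaling ρ_ε(s,y) = ε² ρ(t + ε²s, x + εy) from the talk and shows why the cancellation is exact precisely when n = 2:

```latex
\iint_{Q_{2\varepsilon}} \rho(\tilde t,\tilde x)^2 \, d\tilde t\, d\tilde x
 = \iint_{Q_2} \rho\!\left(t+\varepsilon^2 s,\; x+\varepsilon y\right)^2
   \,\varepsilon^{2}\,\varepsilon^{n}\, ds\, dy
 = \varepsilon^{2+n}\,\varepsilon^{-4}
   \iint_{Q_2} \rho_\varepsilon(s,y)^2 \, ds\, dy
 = \varepsilon^{\,n-2} \iint_{Q_2} \rho_\varepsilon^2 \, ds\, dy .
```

For n = 2 the factor is 1 and the norm is critical; for n > 2 the factor ε^{n−2} goes the wrong way, which is why a different critical quantity is needed in higher dimension.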
That was that side of the story; when we did this we were also interested in the other cases, 1D and so on, but that is the idea. OK, so where do we go from there if we want higher dimensions? If we want to use this machinery in higher dimension, we need to find a global control, a global quantity like the L² (the L² itself is now bad), which controls the system and which will shrink through the scaling. And surprisingly there is one. It is, at least formally, the inverse Laplacian: let us give it a name,

u(t,x) = Δ^{-1} ρ(t,x).

So I consider this quantity. And why is it good? Because you have an equation for it, and it is pretty easy. (Oh, I put a plus; sorry, it is a minus, because I talked about the dual. This is the right one.) Take Δ^{-1} of the equation on ρ: the time derivative commutes with Δ^{-1}, and on the other term Δ^{-1} Δ(dρ) gives back dρ, which is d Δu. So u satisfies the equation, but now in the non-conservative form:

∂t u − d(t,x) Δu = 0.

And so you have an L^∞ bound on this; that is a triviality: you can use the maximum principle on this quantity, on Δ^{-1}ρ. So

‖u(t)‖_{L^∞} ≤ ‖u(0)‖_{L^∞} for all t.

And I like this quantity. It seems pretty weak (we want to control L^p norms by something very weak), but the good thing is that it is critical for the system. And the way to see this: when you differentiate, each derivative brings out one power of ε.
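The maximum principle invoked above can be seen on a simple monotone discretization of ∂t u = d(t,x) Δu (a sketch; the variable coefficient and the CFL constant are my own illustrative choices): under the CFL condition each update is a convex combination of neighboring values, so the sup norm cannot increase.

```python
# Discrete maximum principle for dt u = d(x) Lap u with a variable
# coefficient trapped in [d_min, d_max], periodic 1-D grid.
import math

N = 80
dx = 1.0 / N
d = [1.0 + 0.9 * math.sin(2 * math.pi * j / N) ** 2 for j in range(N)]  # in [1, 1.9]
dt = 0.4 * dx * dx / max(d)        # CFL: d_j*dt/dx^2 <= 0.4 < 1/2, monotone scheme

u = [math.sin(2 * math.pi * j / N) + 0.3 * math.cos(6 * math.pi * j / N)
     for j in range(N)]
sup0 = max(abs(v) for v in u)

sups = []
for _ in range(300):
    u = [u[j] + dt * d[j] * (u[(j - 1) % N] - 2 * u[j] + u[(j + 1) % N]) / dx ** 2
         for j in range(N)]
    sups.append(max(abs(v) for v in u))

print(all(s <= sup0 + 1e-12 for s in sups))   # sup norm never exceeds the initial one
```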
So Δ^{-1} takes out an ε²; it is an anti-derivative taken twice, and it exactly cancels the ε² in front of ρ_ε. So if you take the L^∞ norm, it is the same: this quantity is preserved through the scaling, in every dimension. So that is good, but this is a typically critical situation, and there are now two problems to solve; and that would be it, two problems. The first is that ‖Δ^{-1}ρ‖_{L^∞} does not shrink when ε goes to zero. We want things to become small when we zoom. When it was an L^p norm, I could say: when I integrate over smaller and smaller sets, it becomes smaller. But here it is a sup, and a sup does not change when you restrict; it does not shrink. This is the typical problem you face with critical quantities that do not shrink. And the second problem is: how do you control L log L from the L^∞ norm of Δ^{-1}ρ? Because even if I am able to say that locally this weak norm is small, it is not in this weak norm that I need to control things; it is an L¹-type norm, the entropy. So there are two things to get. Let us begin with the second one. The first observation is that we almost control this, because ρ is non-negative. So it is like a measure, and there is this property: when you have a sign, the total mass is controlled by any weak norm. And it is very easy to show: you just need one integral.
So if you want to control ρ on Q_2, take a fixed smooth function φ, with φ ≥ 1 on the ball B_2, which will control everything on ρ there. Then you can always write

∫ ρ(t,x) φ(x) dx = ∫ (Δ^{-1}ρ)(t,x) Δφ(x) dx ≤ ‖Δ^{-1}ρ(t)‖_{L^∞} ‖Δφ‖_{L^1}.

This is just what I said: when you know that something is non-negative, any weak norm controls its L¹ norm. So you control ρ in L^∞ in time with values in L¹ in space, locally. And the only thing you want now is to go from L^∞(L¹) to just a bit more. And there, that is just an application of the Alexandrov-Bakelman-Pucci theorem; there is a dual version of it (sometimes one attaches Krylov's name as well; whatever). What this theorem says, and it works on the non-divergence equation

∂t v − d Δv = f,

is the answer to: what do you need on f in order to have v bounded? It gives you

‖v‖_{L^∞} ≤ C ‖f‖_{L^{n+1}}

when you have no boundary terms; if I remember well the exponent is n + 1, because of the parabolic scaling. And basically the game is to say that the equation on v is the dual of the equation on ρ; and because it is the dual, by duality (there is a bit of work, but let me be fast here), you are able to show that

‖ρ‖_{L^{(n+1)/n}} ≤ C ‖ρ‖_{L^∞_t(L¹_x)},

so an integrability exponent slightly more than 1, depending on n.
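The duality pairing in the first display is a pure integration by parts, which survives discretization because the discrete Laplacian is symmetric. Here is a small 1-D check (Dirichlet discrete Laplacian inverted with the Thomas algorithm; the data are illustrative):

```python
# Discrete check of:  sum_j rho_j phi_j = sum_j u_j (Lap phi)_j
#                     <= ||u||_inf * ||Lap phi||_1,   with u = Lap^{-1} rho.
import math

N = 50
dx = 1.0 / (N + 1)

def lap(v):                        # Dirichlet discrete Laplacian (zero outside)
    out = []
    for j in range(N):
        left = v[j - 1] if j > 0 else 0.0
        right = v[j + 1] if j < N - 1 else 0.0
        out.append((left - 2 * v[j] + right) / dx ** 2)
    return out

def solve_lap(f):                  # Thomas algorithm for the tridiagonal Lap u = f
    lo, di, up = 1.0 / dx ** 2, -2.0 / dx ** 2, 1.0 / dx ** 2
    cp, fp = [0.0] * N, [0.0] * N
    cp[0], fp[0] = up / di, f[0] / di
    for j in range(1, N):
        m = di - lo * cp[j - 1]
        cp[j] = up / m
        fp[j] = (f[j] - lo * fp[j - 1]) / m
    u = [0.0] * N
    u[-1] = fp[-1]
    for j in range(N - 2, -1, -1):
        u[j] = fp[j] - cp[j] * u[j + 1]
    return u

rho = [1.0 + math.sin(math.pi * (j + 1) * dx) for j in range(N)]   # rho >= 0
phi = [math.sin(math.pi * (j + 1) * dx) ** 2 for j in range(N)]    # fixed test function

u = solve_lap(rho)                                                 # u = Lap^{-1} rho
lhs = sum(r * p for r, p in zip(rho, phi))
rhs = sum(ui * Li for ui, Li in zip(u, lap(phi)))
bound = max(abs(v) for v in u) * sum(abs(L) for L in lap(phi))

print(abs(lhs - rhs) < 1e-6 * abs(lhs))   # summation by parts holds
print(abs(lhs) <= bound + 1e-9)           # the weak norm controls the mass
```

The point mirrored here is exactly the one in the talk: because ρ ≥ 0, the single pairing with a fixed φ controls the local L¹ norm of ρ by the weak norm of Δ^{-1}ρ.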
So the dual of L infinity is L1, and you work with that. I mean, you have to work a bit, because there is a boundary, so you have to do it locally, right? But basically it works like this, and in the elliptic case it was actually already observed by Rivière that this equation is a dual and that you can gain a bit. But then it's done, in two steps: first we say that this weak norm controls the total mass locally, and then this local mass gains integrability through the parabolic equation, which is really the worst form of the equation among the three. If you remember what we said: the best in terms of regularity is the non-divergence form, because basically you gain two derivatives. Then there is the energy form, when you have the d in the middle, and there you gain one derivative, and if the d has some regularity, you get one more on u. And here we have the third one, with the d inside the Laplacian: basically you don't gain much, but you gain a little bit of integrability. And this is exactly what we need here, to gain a bit of integrability, because in any dimension this will control L log L. That is why we can do it in every dimension; that's the key thing. Okay, so this part is fine: if we can show these things, we will shrink, and by shrinking we make the critical quantity small; it controls the local mass, which controls the integrability, so we control our nonlinearity, because L log L is enough to control it, and we have the regularity. So now, why do we shrink? The point is that we can get slightly better than what I claimed there, and I will state it as a lemma, for rho non-negative, solution of the equation with the diffusion d(t,x) inside the Laplacian.
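The three forms of the parabolic equation being compared here, written out:

```latex
\begin{align*}
  \partial_t u - d\,\Delta u &= f
    && \text{non-divergence form: gains two derivatives,}\\
  \partial_t u - \nabla\!\cdot\big(d\,\nabla u\big) &= f
    && \text{divergence (energy) form: gains one derivative,}\\
  \partial_t \rho - \Delta\big(d\,\rho\big) &= 0
    && \text{dual form: gains only a little integrability.}
\end{align*}
```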
So take rho non-negative, and again we will use the fact that our rho is non-negative, a solution of dt rho minus Laplacian of d(t,x) rho equals zero; I write the d(t,x) to remember that it is not a constant, right? Then for any dimension n there exists alpha, depending on n and on the ellipticity bounds of d, the little bit of room that we have, such that the C alpha norm of Laplacian minus 1 of rho at time t is smaller than a constant times a negative power of t times the L infinity norm of Laplacian minus 1 of rho at time zero. I put the power of t just to show that it is a regularization effect here, not a propagation; the exact power is not very important. So we show a little bit more than L infinity: you get C alpha. And this C alpha is what makes things shrink, because now you have more regularity, so under the zoom the quantity really goes down. Again, you don't gain much, but you gain just a little bit, and it is a genuine regularization effect: at t equals zero you have only L infinity, and for positive t something better appears. And this is not true for any solution; the positivity of rho is important. And let me, at least for the specialists, show you where it comes from, because at the end it is pretty easy actually. Write the equation for u, where u is Laplacian minus 1 of rho, and consider, with d bar the minimum of d, the quantity dt u minus d bar Laplacian of u. The equation is in conservative form, so dt u is Laplacian minus 1 of Laplacian of (d rho), which is just d rho, and Laplacian of u is rho. So dt u minus d bar Laplacian u equals (d(t,x) minus d bar) times rho, and since d bar is the minimum, this is non-negative. And now I play the same game with the maximum of d, and what you get is non-positive. So you have that u is actually a supersolution for a certain fixed diffusion constant and a subsolution for another one, right?
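The one-line computation just sketched on the board, written out; underline and overline d denote the minimum and maximum of d(t,x):

```latex
% With u := \Delta^{-1}\rho, \ \underline{d} := \min d, \ \overline{d} := \max d,
% and \partial_t \rho = \Delta(d\rho):
\[
  \partial_t u - \underline{d}\,\Delta u
  = \Delta^{-1}\Delta\big(d(t,x)\,\rho\big) - \underline{d}\,\rho
  = \big(d(t,x) - \underline{d}\big)\,\rho \;\ge\; 0,
\]
\[
  \partial_t u - \overline{d}\,\Delta u
  = \big(d(t,x) - \overline{d}\big)\,\rho \;\le\; 0,
\]
% using \rho \ge 0: u is a supersolution of one constant-coefficient heat
% equation and a subsolution of another, which is the structure behind the
% Harnack inequality.
```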
And when you have this structure, you can show that you have the Harnack inequality, which gives the C alpha. And why? Well, this is pretty easy, because now the diffusion is really a constant. If I want to say in one minute why it is true, I would say: Luis Caffarelli gave the proof on Wednesday using De Giorgi. You don't need De Giorgi here, but just to relate it to something that you have seen on Wednesday, right? Because if you remember, he said: when you have something in Q2 and you want to go to Q1, and you know that your function is between minus one and one, you want to say that the sup minus the inf on Q1 decreases. This is what we call the oscillation lemma. If you have this, then doing the blow-up again and again, you get C alpha. So to prove it, you look whether the function is more below zero or more above zero; let's say, I am never sure in my convention, that it is more below, so there are more values where I am below. And then I say: well, the function is really pulled down, so certainly I will be able to decrease it from above. And to do this, you use the fact that you have a subsolution, and because you have a subsolution you can use all the De Giorgi techniques, or something else; here you can take a suitable comparison function, but whatever, it works. And you show that then, in two steps, there is a lambda by which you decrease the oscillation. And that's it, this is the thing. And this is usually how you proceed, once you know that your function is bounded there, right?
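The step from "the oscillation decreases by a fixed factor at each zoom" to "C alpha" is elementary arithmetic, and it can even be checked mechanically. A toy sketch, not the lecture's proof: if each halving of the radius loses the factor (1 - lambda), the oscillation at radius 2**-k equals that radius to the power alpha = log(1/(1-lambda))/log 2.

```python
import math

def holder_exponent(lam: float) -> float:
    """Alpha such that the per-zoom decay factor (1 - lam) equals the
    power law (1/2)**alpha per halving of the radius."""
    return math.log(1.0 / (1.0 - lam)) / math.log(2.0)

def oscillation_at_scale(lam: float, k: int) -> float:
    """Oscillation over the cylinder of radius 2**-k, starting from
    oscillation 1 and losing the factor (1 - lam) at each zoom."""
    return (1.0 - lam) ** k

# With lam = 0.1 the geometric decay of the oscillation is exactly a
# Hoelder modulus r**alpha at the dyadic scales r = 2**-k:
lam = 0.1
alpha = holder_exponent(lam)
for k in range(8):
    r = 2.0 ** (-k)
    assert abs(oscillation_at_scale(lam, k) - r ** alpha) < 1e-12
```

The lambda here is the (hypothetical) oscillation-decrease constant from the lemma; the smaller it is, the smaller the Hölder exponent you obtain.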
And, okay, so you know that on Q1 the oscillation decreases, and you do it again: you zoom, and there it decreases again, et cetera, and you can see that a C alpha structure comes in, right? So when you have something which is of order one in Q1, you do it by rescaling, rescaling, rescaling, and you get the C alpha structure. I mean, it's almost a proof, actually; this is very, very typical in this kind of theory. Okay, so I have two minutes, right? In two minutes I had planned to show a bit of the De Giorgi proof for the system, so you can see that it was a bit optimistic. So I won't do this, but at least I will try in two minutes to convince you, not sure, but to convince you that here again, when you run De Giorgi with respect to the entropy, you kill one power of the nonlinearity. And why is that? It is because the fundamental object for the De Giorgi method is the truncation, right? And we do it for all the a_i: we put all the functions together. And you go down to level one, say, because I want to show that there is nothing above one, right? So I have a sequence c_k which increases to one, and for each fixed k I want to consider the entropy above c_k, and the dissipation above c_k. So basically what I want to consider is a quantity U_k, which is something like the sum over i of the integral in x and t of (a_i minus c_k)_+ times log of (1 plus (a_i minus c_k)_+). This will control the strength of my function above the level. It is almost the entropy; I mean, it is very close to the entropy, I just have the one inside the log. It is like the entropy of the part which is above c_k, right?
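Written as a formula, the truncated entropy just described reads as follows; this is my transcription of the spoken formula, with the sum over the four species:

```latex
\[
  U_k \;=\; \sum_{i=1}^{4} \iint
  \big(a_i - c_k\big)_+\,
  \log\!\Big(1 + \big(a_i - c_k\big)_+\Big)\,dx\,dt,
  \qquad c_k \nearrow 1 .
\]
```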
And with this U_k, the game is to show that U_{k+1} is smaller than C^k times U_k to the power gamma, with gamma strictly bigger than one. I won't show you how to do this, but this is basically the game: you have the sequence of energies above the successive levels, and you compare them in a nonlinear way like this. The C^k blows up, but the relation is nonlinear, and if the first term is very small, the sequence converges to zero. This is exactly the standard lemma: there is a delta, depending only on the constant C here, such that if U_0 is smaller than delta, then the limit of U_k, as k converges to plus infinity, is zero. But U_k converges towards the entropy above level one, and if the entropy above level one is zero, it means that actually my solution is smaller than one. Yes, that's the general framework, and that's very nice, actually. But this means what? This means that I need to be able to work with the entropy from one level c_k to the next one, so not the real entropy anymore, but the entropy that I have just above c_k. And this will be the last thing, in the time that I have: that I can control this. So: d by dt of the sum over i of (a_i minus c_k)_+ log(1 plus (a_i minus c_k)_+), plus the dissipation, and this is a good term, is smaller than what? Well, let us see what we have. It is the sum over i of the truncated quantities times q_i of a. These terms I do not know, and they are very bad, because they are quadratic, of order two, right? But what I say is that this is equal to the sum over i of the truncated quantities times q_i evaluated at (a minus c_k)_+, plus the sum over i of the truncated quantities times the difference q_i of a minus q_i of (a minus c_k)_+. I guess I have a problem of sign here somewhere, okay, but never mind. And the first piece, I claim, is non-positive, right?
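The abstract lemma invoked a moment ago (the recursion U_{k+1} ≤ C^k U_k^gamma with gamma > 1 forces U_k to zero when U_0 is small) can be checked numerically. A quick sketch; the constants C = 8 and gamma = 3/2, and the threshold formula delta = C^(-1/(gamma-1)^2), are the standard textbook ones, not values from the lecture:

```python
def de_giorgi_sequence(u0: float, C: float, gamma: float, steps: int) -> list:
    """Iterate the nonlinear recursion U_{k+1} = C**k * U_k**gamma."""
    seq = [u0]
    for k in range(steps):
        seq.append((C ** k) * seq[-1] ** gamma)
    return seq

C, gamma = 8.0, 1.5
# Standard smallness threshold: if U_0 <= C**(-1/(gamma-1)**2), then by
# induction U_k <= U_0 * C**(-k/(gamma-1)), hence U_k -> 0: the superlinear
# power gamma beats the geometric growth of C**k.
delta = C ** (-1.0 / (gamma - 1.0) ** 2)
seq = de_giorgi_sequence(0.5 * delta, C, gamma, steps=40)
for k, u in enumerate(seq):
    assert u <= seq[0] * C ** (-k / (gamma - 1.0)) + 1e-300
assert seq[-1] < 1e-100  # the truncated entropies collapse to zero
```

Above the threshold the same recursion blows up instead, which is why the scaling/shrinking work earlier in the talk is needed: it is what makes U_0 small.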
Because now it has exactly the same structure that we had on the q, so I have killed the big values, and I can drop this piece, and what is left is q_i of a minus q_i of the truncated a. You see, the problem is when a is very big, but this difference kills one of the higher-order factors, right? So that is how you kill it here: locally, if you are interested only in the entropy above the level, doing it step by step, you kill one power, right? And the idea is that it works because you really work with the physical entropy. Not an L2 energy, not an artificial entropy; you really stick to the entropy. And because you stick to the entropy, when you go from c_k to c_{k+1}, you still kill, I mean, deplete, not kill completely, but deplete the nonlinearity at the big values, right? And that's why it works. I think it's a good time to stop.