And today, I will only speak about 2D; tomorrow I'll speak about 3D. Yesterday I just did a small overview here, so let me recall very rapidly the 2D part of it. If you remember, here I said Gevrey, and here I said Sobolev. And here gamma was 0, and here I said gamma less than or equal to one half. So here there are two papers. There is a paper by Bedrossian and myself, and that corresponds to nu equals 0. And then the other paper was also with Vicol, and that's nu larger than 0. The Sobolev case was done by Bedrossian, Vicol, and Wang, I think at Princeton or Maryland. Anyway, today I'll explain a little bit these two results, starting with the nu equals 0 one. OK, so maybe I can recall very rapidly, since maybe there are some new faces, what we started last time. So I'm writing the equation. We start with the Navier-Stokes equation with viscosity nu, because nu equals 0 will correspond to the Euler case. And we are looking at perturbations of the so-called Couette flow, which is a stationary solution. Today I am working in the case of the torus times R: x is in the torus, y is in R. So if I write my solution as the Couette flow plus the perturbation, then the perturbation u solves the following equation, with the terms y dx u plus u · grad u. Think of u as a small perturbation: I start with small initial data, and I'm trying to understand the long-time behavior. OK, so yesterday I ended by telling you something about a change of variable. If you look at this equation, the most problematic term is this y dx u term, because it is a linear term. And what I explained last time is that there is a change of variable that allows us to get rid of it. [Question:] Did you consider the additional term, the u2 e1 term?
In two dimensions, that term is, sorry. Yeah, no, it is there; you are right. Here I should be writing it, since I'm writing the equation on u, but then it disappears when I take the curl. If I'm consistent with my notations, yes. Actually, it's true that in the 2D case we never write the equation for the velocity; we immediately write the equation on omega. So, as I said, when we are in two dimensions we don't work with the velocity but with the vorticity; tomorrow I will be working more with the velocity. So the interesting quantity is the vorticity, omega = dx u2 minus dy u1. And we can write the following equation for the vorticity: dt omega plus y dx omega plus u · grad omega. This is actually a closed equation on omega, because you can recover u by a non-local operation, the Biot-Savart law: u is grad-perp of the inverse Laplacian of omega. Now, what I would like to do, and I think we started doing that yesterday, is to split things into linear effects and nonlinear effects. The first three things I'm going to talk about are three linear effects. The first one, which I described last time, is a change of variable that allows us to get rid of this y dx term. I'm going to write it on the nonlinear equation, as I did last time, so that we still keep the nonlinear term. The natural change of variable is X = x minus ty. And if I write capital Omega for my small omega in the new variables, the equation becomes: dt capital Omega plus grad-perp phi · grad_L Omega equals nu Laplacian_L Omega; the good thing is that the y dx term disappears. And Laplacian_L phi equals Omega. And here the important fact is this grad_L, which is the following gradient.
So I think I explained last time that the gradient in (x, y) of small omega gets transformed into this gradient_L of capital Omega. [Question: capital X is no longer periodic?] No, it is. It's OK; you just go around. And actually, since you asked this question, this is really behind the terminology "mixing" that we use. Basically, just think about the transport equation. So if you look at this transport equation, let's say you start with a small perturbation, periodic in x. This is your perturbation at time 0, say, and this is y = 0; what's at y = 0 will not really be moving. But at later times this will be tilted, then more tilted, and then it starts becoming filamented, exactly because of the periodicity. And it's not difficult to see: you take a weak limit, and the weak limit will be the average in x. It's a simple exercise. Take the transport equation, forgetting about the vorticity structure; you can solve it exactly: the solution is the initial data evaluated at (x minus ty, y), so the y stays and x becomes x minus ty. And this converges weakly, when t goes to infinity, to the average in x. We said x is periodic, so it's just the average over the period; you go to a function of y. In particular, that is even consistent with the fact that you go back to a shear flow: at least in the inviscid case, you go to a shear flow, not necessarily Couette, but a perturbation of Couette; you go to something that is still a function of y. You can already see it at the linear level: let's say at the linear level I forget about the nonlinear term and forget about the viscosity.
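This weak convergence is easy to see numerically. The following is my own small sketch (not from the lecture): it pairs the exact transport solution omega(t,x,y) = omega0(x - t*y, y) against a fixed smooth test function and checks that the pairing converges to the pairing with the x-average.

```python
import numpy as np

# Mixing by Couette: omega(t,x,y) = omega0(x - t*y, y) on T_x x R_y.
# Pointwise nothing decays, but tested against a smooth function the
# solution converges weakly to its x-average as t grows.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
y = np.linspace(-6.0, 6.0, 4001)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

omega0 = lambda a, b: (1.0 + np.cos(a)) * np.exp(-b**2)  # x-average: exp(-y^2)
g = (1.0 + np.cos(X)) * np.exp(-Y**2)                    # smooth test function

def pair(t):
    """<omega(t), g>: integral of omega0(x - t*y, y) * g(x, y) over the strip."""
    return float(np.sum(omega0(X - t * Y, Y) * g) * dx * dy)

limit = float(np.sum(np.exp(-Y**2) * g) * dx * dy)       # <x-average, g>
print(pair(0.0) - limit)    # O(1): no mixing yet
print(pair(10.0) - limit)   # nearly zero: the weak limit is reached
```

The oscillation cos(x - ty) in y averages out against the Gaussian weight, so the difference at t = 10 is already negligible.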
So if you forget about the nonlinear term and the viscosity, then you have dt of capital Omega equals 0, which means that nothing is happening, because all shear flows are stationary solutions. OK, so actually, another interesting fact about this that I want to mention: that's one of the reasons why, at least as of now, we only know how to do our result if we perturb Couette. We are trying to understand how to generalize it to perturbations of other shear flows, not just Couette. One interesting fact is that if you take the Fourier transform of omega, say small omega, you still get a transport equation on the Fourier side. And there is a Fourier-side explanation of this picture, which is the fact that you are sending information to high frequencies in eta. So if I take omega hat: remember, here I'm taking (x, y), which becomes (k, eta). It may be confusing, because afterwards I will be using (k, eta) for the Fourier variables in capital X and Y; the eta I'm writing here will be different from the eta I'll be writing afterwards, but it doesn't matter. So y dx becomes minus k d_eta; I mean, there may be a minus, I forget whether it's a minus or not. So then your omega hat of (t, k, eta) becomes your initial omega hat of (k, eta plus tk). So what's happening here: for k equals 0, nothing is happening; you are keeping the mode k equals 0 untouched. But for k different from 0, the information is being sent to high frequencies. And since your initial omega hat is in some space, say some Sobolev space or whatever, you start seeing some decay when t goes to infinity, but only for k different from 0.
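The Fourier-side picture fits in a few lines. This is my own toy illustration, with an invented decaying profile for the initial data: the eta argument is shifted by t*k, so the k = 0 mode is frozen while every non-zero mode reads the initial data at ever higher frequency and therefore decays.

```python
# Free transport on the Fourier side: omega_hat(t,k,eta) = omega0_hat(k, eta + t*k).
omega0_hat = lambda k, eta: 1.0 / (1.0 + k**2 + eta**2)  # some decaying profile

def omega_hat(t, k, eta):
    # the only effect of the flow is the shift of the eta argument
    return omega0_hat(k, eta + t * k)

print(omega_hat(100.0, 0, 1.0))  # k = 0: frozen forever at omega0_hat(0, 1)
print(omega_hat(100.0, 1, 0.0))  # k != 0: pushed to frequency ~ t*k, decays like 1/t^2
```

Any Sobolev-type weight on the initial data plays the role of the 1/(1 + k^2 + eta^2) profile here.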
So this is what is called mixing, which in a sense, the way I understand it, is more or less like dispersion, but with the Fourier side and the physical side swapped. OK, so this is the first linear effect I wanted to tell you about, the change of variable. It's a simple thing, but you already start seeing some interesting features from it. Now this change of variable has two visible effects: here, instead of the Laplacian, you now have Laplacian_L, and here, instead of the Laplacian, you have Laplacian_L. And each of these two Laplacian_L's will give us a linear effect that will be important. So the next linear effect I want to tell you about is the so-called enhanced dissipation. This is an effect for viscosity strictly bigger than 0, and it is a linear effect. So let me just write down the linear equation on Omega: dt Omega equals nu Laplacian_L Omega. Now you can see what your Laplacian_L is: it has a t in it. If you write this on the Fourier side, you will have, OK, don't get confused: the eta there and the eta here are not the same; there is a relation, it's a small exercise one can do, but here I'm taking the Fourier transform in capital X, capital Y. So this becomes dt Omega hat equals minus nu (k squared plus (eta minus tk) squared) Omega hat. Does that make sense for everyone? Then you can solve this; you can write the formula: Omega hat of (t, k, eta) equals the exponential of minus nu times the integral between 0 and t of k squared plus (eta minus sk) squared ds, times your initial data. Of course, the initial capital Omega is the same as the initial small omega, because at time 0, capital X and small x are the same. So this is the formula you get.
So then, the interesting fact here is that we have a t squared here, and when you integrate it, it becomes t cubed. There is a small calculation one can do to prove a bound uniformly in k and eta, at least when k is different from 0; the important effect for k different from 0 is that this term gives you more viscous effect. Of course, you can already see that there are times where you may have small problems: when eta minus tk starts becoming 0, you can lose some uniformity, and you have to be careful about those times. But what one can prove, and it's not a difficult calculation, is an estimate of this type: your Omega hat of (t, k, eta), for k different from 0, is bounded by something depending on the initial data times exp of minus c nu t cubed, with some constant c. So that's why the name enhanced dissipation: now you see dissipation over time scales of order nu to the minus 1/3. Are you all used to the heat equation? Normally, the heat equation gives you decay over times like nu to the minus 1. And this is a very important fact: here we are thinking about very small viscosity, so if my viscosity is like 10 to the minus 30, this means I get decay over times like 10 to the 10 rather than 10 to the 30. That's how you should understand it: it gives much stronger dissipation for non-zero frequencies. And by the way, this is really enhanced dissipation by mixing, and it is really what happens when you put sugar in your coffee and you stir it: the reason you rotate your fluid is that you want to enhance the effect of the dissipation. OK, so this happens for k different from 0.
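One can check the t cubed mechanism directly from the exact formula. The following is my own numerical sketch, with an arbitrary choice nu = 1e-6: it evaluates the exponent by quadrature and compares the enhanced-dissipation factor against the plain heat factor at a time of order nu to the minus 1/3.

```python
import numpy as np

# Enhanced dissipation: Omega_hat(t) = exp(-nu * I(t)) * Omega0_hat, with
# I(t) = int_0^t k^2 + (eta - s*k)^2 ds ~ k^2 t^3 / 3 for large t, so the
# decay kicks in at times ~ nu^(-1/3) instead of the heat scale nu^(-1).
nu, k, eta = 1e-6, 1.0, 0.0

def I(t, n=200000):
    s = (np.arange(n) + 0.5) * (t / n)          # midpoint quadrature nodes
    return float(np.sum(k**2 + (eta - s * k)**2) * (t / n))

t = 4.0 * nu ** (-1.0 / 3.0)                    # a few enhanced-dissipation times
print(np.exp(-nu * I(t)))                        # essentially gone already
print(np.exp(-nu * k**2 * t))                    # plain heat factor: still ~ 1
```

For eta = 0 the exact exponent is nu (k^2 t + k^2 t^3 / 3), which the quadrature reproduces; the contrast between the two printed factors is the whole point.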
Of course, for k equals 0, the zero modes will just experience the regular heat-equation dissipation; there's no enhanced dissipation for the zero mode. OK. So again, this effect came from the Laplacian that became Laplacian_L in the new coordinate system. Now there is another effect that comes from the other Laplacian_L, which is actually an inviscid effect, not a viscous effect. So the third linear effect is the so-called inviscid damping. I don't know exactly when the terminology came in, but I think it's around the 70s or the 80s; the terminology came by analogy with Landau damping in plasma physics, even though historically Landau damping is from around the 40s. But the first instance of inviscid damping goes back to a 1907 paper by Orr. It was somehow a forgotten paper, but I think now it has come back as a very important paper. So what is inviscid damping? The whole thing is really in one formula: just write down the formula and you can see what's going on. So I go to the Fourier side and look at my phi hat; I think I have to put a minus, OK? If I ignore the viscous effect and I ignore the nonlinear term, then omega hat is just the initial data, and phi hat of (t, k, eta) equals minus omega hat initial of (k, eta) divided by k squared plus (eta minus tk) squared; I can't forget about the t in the denominator. So now what's happening? If you look at this a little bit, you can observe two facts. First, for t large, when t goes to infinity, your phi will decay like 1 over t squared, at least for k different from 0. So that's the first thing you can observe. The other thing you can observe is that if k and eta have the same sign, before you get this decay, you can have some growth. So there is a transient growth before the decay, right?
So: decay, after possible transient growth. And all of this is just in the denominator. Just look at the denominator; think of the numerator as constant. You are essentially drawing the function 1 over (1 plus x squared): it just goes up and down, right? So if I draw the picture of this as a function of time, in the case where k and eta have the same sign: I start here, I reach my maximum at the time t equals eta over k, and then my function decays, like 1 over t squared. So this is why it is called inviscid damping: the stream function decays like 1 over t squared. But the velocity will also decay. Now the velocity, remember, when I made the change of variable I also said that the velocity in the new coordinates (capital X, capital Y) is grad-perp_L of phi; I think I explained that a little bit. And this means it is (d_Y minus t d_X) phi with a minus, and d_X phi. So on the Fourier side, modulo signs, OK, let me not think about signs and just put absolute values: your u hat of (t, k, eta) is (|eta minus tk|, |k|) times phi hat, and phi hat we wrote there: divided by k squared plus (eta minus tk) squared. So what does this imply? It implies that, for k different from 0, u1 hat will decay like 1 over t, because the factor |eta minus tk| kills one of the two factors in the denominator, and u2 hat will decay like 1 over t squared. OK, so here I'm not being very precise: all those things are not completely uniform in (k, eta); you have to worry a little bit about that.
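The whole Orr mechanism really is one rational function of t. Here is my own normalized sketch (constants set to 1, a single mode with k and eta of the same sign): transient growth up to the Orr time t = eta/k, then the 1/t and 1/t^2 rates for the two velocity components.

```python
# Orr mechanism: with omega_hat frozen at the initial data,
# phi_hat(t) = omega0_hat / (k^2 + (eta - t*k)^2).  Same-sign k, eta means
# the denominator first shrinks (growth, peak at t = eta/k), then grows ~ t^2.
k, eta, w0 = 1.0, 40.0, 1.0

phi = lambda t: w0 / (k**2 + (eta - t * k)**2)
u1 = lambda t: abs(eta - t * k) * phi(t)  # |u1_hat| ~ |eta - t*k| |phi_hat| -> 1/t
u2 = lambda t: abs(k) * phi(t)            # |u2_hat| ~ |k| |phi_hat|        -> 1/t^2

print(phi(0.0), phi(eta / k), phi(10 * eta / k))  # small, peak, then decayed
```

The peak value is 1/k^2, reached exactly when the tilted frequency eta - t*k passes through zero.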
And OK, since I like giving exercises, let me give a small exercise for those who like them. Maybe I'll erase this because I'm not going to use it. So the exercise says the following: decay costs regularity. This is really it: decay costs regularity, and it is an important point. By decay, I mean decay in time. Here the subscript "different from 0" means you take u1 and remove its x-average. I'm not putting a hat: I'm taking this in physical space, and I'm looking at it in H^s. What you can prove, for the linear problem without viscosity, so this term is not there and this term is not there, just this formula with omega hat frozen at omega hat initial, fixed initial data, is the following: the H^s norm of u1 without its average is bounded by 1 over the Japanese bracket of t, times the H^{s+1} norm of omega initial. Same thing if you look at u2 in H^s: you get 1 over the bracket of t squared, times the H^{s+2} norm of omega initial. So if you think a little bit about it, you are losing three derivatives in this: if you want to get the 1 over t squared decay, you lost three derivatives, because normally u is one derivative better than the vorticity, but to extract the decay you are losing two more. So there are three derivatives lost to extract the decay. OK, and for those familiar with Landau damping: in the Vlasov case, the amount of decay depends also on the regularity, meaning that if you put more regularity, Gevrey or analytic data, you can get exponential decay.
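For the u2 part of the exercise, the estimate reduces (in the s = 0 case, and up to how one books the frame change) to a pointwise Fourier-multiplier inequality. This is my own grid check of that inequality, not a proof: two extra powers of frequency on the right buy the two powers of time decay on the left.

```python
import numpy as np

# Multiplier inequality behind "decay costs regularity" (s = 0 case):
#   <t>^2 |k| / (k^2 + (eta - t*k)^2)  <=  C <(k, eta)>^2      for k != 0,
# where <z>^2 = 1 + |z|^2 is the squared Japanese bracket.
jb2 = lambda z2: 1.0 + z2

worst = 0.0
eta = np.linspace(-300.0, 300.0, 6001)
for t in np.linspace(0.0, 200.0, 401):
    for k in range(1, 30):
        lhs = jb2(t**2) * k / (k**2 + (eta - t * k)**2)
        rhs = jb2(k**2 + eta**2)
        worst = max(worst, float(np.max(lhs / rhs)))
print(worst)  # stays of order one on this grid
```

The near-extremal points are exactly the Orr times eta = t*k with k = 1, where both sides are comparable; away from them the left side is much smaller.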
If you put higher Sobolev, you get a power of t depending on the regularity you have. So that's why Landau damping, from this point of view, is easier in terms of decay. So this is again a completely linear effect. And the interesting thing about this effect is that it is actually independent of the viscosity. I wrote you here the formula with the initial data; but even if you put the viscous factor in, it is a term that only helps you. So we still get this with the viscosity, and the point is that it is all uniform in the viscosity. So, trying to handle both the viscous and the inviscid case at once, you can see there are at least three different timescales in this problem. There is the timescale on which the inviscid damping happens, which is independent of the viscosity. That's one thing. Then you have the timescale nu to the minus 1/3, which gives you this sort of enhanced dissipation for the non-zero modes. And then there is the zero mode itself, which decays on the regular timescale of the heat equation, nu to the minus 1. Any questions, maybe, before I move to the nonlinear effects? Yes? [Question about the "not equal" subscript.] Oh, so "not equal" means the non-zero modes. Maybe I should explain the notation. Actually, that's the reason why the 1 and the 2 are superscripts: in the paper we always write it like that, because when we do the Fourier transform we put the k and eta as subscripts. And then we distinguish the zero mode, which is just the average in x. OK, let me write it without the Fourier: what I mean by u_0 is just the average in x, and u_{not equal} is u minus the average, OK?
Here I wrote things on the physical side, because I wanted to use the Sobolev norms; but most of the calculations are really done on the Fourier side. OK, now let's go to the nonlinear effects, and then I hope I can state a theorem, maybe next time, I don't know if I'll have time. So, nonlinear effects. What do I mean by the nonlinear effect? It means this term: it is a transport term. And the first thing you can think is that it's not that bad, because I just told you that this velocity is actually the one I wrote here, so I have decay of it. But there is decay only modulo the zero mode: the zero mode is not decaying, right? So I need to do something with the zero mode. And it immediately requires making a nonlinear change of variable. Basically, the first thing we can see from this term is that we need a nonlinear change of variable, OK? It turns out, and this was one of the parts where we spent a lot of time thinking about the right way of making the change of variable, that there are many possibilities one can think about. Because the change of variable here, you can think about it as some sort of Lagrangian change of variable, right? So the first thing you might try is a Lagrangian change of variable. It turns out that's not the best thing to do. What we ended up doing is actually just getting rid of the zero mode: basically, we take X to be x minus ty minus the integral between 0 and t of u^1_0(s, y) ds, where u^1_0 is the x-average of u1; it is just a function of y, it doesn't depend on x.
That's one of the advantages of doing the change of variable this way: in x, it is just a translation by the integral of u^1_0(s, y) ds, which is exactly the piece that doesn't decay, OK? Of course, you should be worried about this piece: it decays like 1 over t, so the decay turns out to be critical, borderline integrable. Actually, in the first version of the paper we had a logarithmic correction in the statement of the theorem because of this term; we thought we needed it. But then there is a very subtle cancellation that told us we don't need that logarithmic correction. It was not important in the proof anyway. But somehow, the advantage of doing this rather than the Lagrangian change of variable is that this is just a translation in x, and it has advantages whenever you look at this picture, which I'm going to talk about again. OK, so that's the change of variable in x. And, it also took us a while to think about this, we also replace y: instead of using y, we make a change of variable replacing y by this quantity v that we put here. So this is the change of variable we make. Here I'm explaining things for nu equals 0, so that at least you understand that case; tomorrow I'll be talking more about nu larger than 0. And actually, there is some subtlety in how we choose v in the case nu not equal to 0: we have to take into account a little bit some dissipative effect, so our change of variable will not be purely of transport type. OK, so this is the change of variable. The hidden reason for it is exactly to keep these operators: what you see here, I want to keep it in a nice form. I don't want this to become something ugly when I invert my variables.
I want to have something that looks like this when I invert my Laplacian; I'll explain that in a moment. So when you make this change of variable, what happens in your equations? Now my Fourier transform in (X, v) gives (k, eta): the Fourier variables are really in this nonlinear (X, v), OK? Let me use the following notation: f of (t, X, v) is my omega of (t, x, y). It corresponds to what I was calling capital Omega, but since I'm now using a different change of variable, let me use a different notation for it. And I'm going to insist only on nu equals 0. We get the following equation: dt f plus u · grad f equals 0, where the u here has nothing to do with the u I was using there. It's a slightly long calculation with changes of variables and so on; the whole thing is really changes of variables. And this u looks like this: it is (0, dt v) plus v prime times grad-perp of the projection onto the non-zero modes of phi, OK? I need to explain a few of the notations. So v is a function of t and y, just from the change of variable; I'll tell you a few things about it right now, but then we'll forget about it. Again, one of the advantages is that I'm not mixing y and x here; that's very important in the proof. What I call dt v is the time derivative of v(t, y), but then expressed in the v coordinate: there is a change of variable that takes y to v, you take the time derivative in y, and then you express it in v. That's a notation, and that's a term that appears; it's not difficult to see.
Whenever you compute dt, the dt at fixed y and the dt at fixed v are not the same: there is a term coming from the change of variable, and it is this term. And what I call v prime is more or less the same thing with the y derivative: v prime is dy of v, but then expressed in (t, v). OK, this is at the technical level, but it's really a nonlinear effect; the reason I insist on this is that there's a nonlinear effect coming from the change of variable and from the choice of the change of variable, right? Actually, I should have insisted on one thing I said, about this u1: it's true that u1 decays like 1 over t. But I explained last time that this is specific to 2D; in 3D it doesn't happen completely. In 2D, what you see here is not u1 but the grad-perp of phi, and the grad-perp of phi actually decays like 1 over t squared, because I'm not seeing the bad term there. As I explained a little bit last time, there is a nice cancellation in this term that makes it 1 over t squared. Basically, you come back here: we got rid of the zero mode, so now this term actually decays like 1 over t squared; the derivative doesn't lose me more because there's no t in it. There are 10 pages of calculation to actually prove that this decays like 1 over t to the 2 minus epsilon. So at the end of the day, when I wrote the equation in this way, I managed to rewrite it with a velocity that is integrable in time, better than 1 over t, actually 1 over t to the 2 minus epsilon. That's the effect of two things: the change of variable removing the non-decaying piece, and this interesting cancellation, some sort of null condition, between the terms that have growth.
So then you end up with a form which is good if you are trying to prove, and that will be the goal, that your f converges to some f infinity. This is a good form to work with. Any questions at this stage? [Question: why not subtract the whole capital U?] If you put exactly capital U, the whole thing, you don't get this: you put an x dependence here, and then it mixes x and y, and I think it completely destroys this structure. So actually, I didn't say one thing here, which is important: in the linear case, I had Laplacian_L phi equals Omega. In the nonlinear case, that will not be true, because my change of variable is nonlinear. So normally, you should be asking me: what is this phi, how do you relate phi to f? The way you relate them, so let me erase this, I'm just going to forget about these terms from now on because I explained more or less how they come: phi satisfies what we call Laplacian_t phi equals f. And the reason we make this choice of change of variable is that I'm trying to make this Laplacian_t look very similar to the Laplacian_L. There is a whole technical part, what we call the elliptic estimates, devoted to saying that Laplacian_t behaves like Laplacian_L. If I made a different change of variable here, this would not be the case. OK. Maybe I can mention one instance; let me write down the formula for it, because then at least you can see why we make this choice, and then we'll forget about it. So Laplacian_t phi is dXX phi, which stays the same, because in x you are just translating, you are not doing anything; and then there are terms with coefficients, let me not write the whole thing down.
So basically, by making the change of variable, there is normally a term which is like v prime squared times (dv minus t dX) squared, and there is another mixed term; but I like this expression, dv minus t dX, to stay the same. If I don't make the change of variable, if I keep y here and not v, there will be some v prime somewhere, and I will not see this piece exactly as it is there. Sorry, you might think it should be 1 plus v prime; no, it's v prime, because v prime is like 1: think of the perturbation as small, so v prime is close to 1, and this is a small perturbation of Laplacian_L. OK, so this was the first nonlinear effect, the change of coordinates. Now the next very important fact is the so-called nonlinear resonances. In the plasma case they are called echoes; we can call them echoes too. And let me try to motivate this. I insist on the motivation, because I think it explains why there is Gevrey, actually. OK, so we have this equation, dt f plus u · grad f equals 0; I'm just taking the inviscid case. And let's say I'm trying to prove some energy estimates; the problem is about global estimates. And I know that my u, which you see there, will decay like 1 over t squared, right? So let's say you try to prove Sobolev bounds. You apply s derivatives to your f, OK? We are all used to applying the Leibniz rule, and you worry about the two extreme cases: either all the derivatives hit u, or all the derivatives hit f. So you get a term like d^s u · grad f, a whole bunch of other terms, and a term like u · grad d^s f. Now, to do the Sobolev estimate, you multiply by d^s f and integrate, and you get dt of the integral of |d^s f| squared over 2 equals, say, a term with d^s u · grad f times d^s f plus a term with u · grad d^s f times d^s f.
OK, let's put absolute values everywhere. The second term you can integrate by parts; it turns out the u you have here is actually not divergence free, but that's not an issue, and after the integration by parts and so on you get something like grad u times |d^s f| squared. OK, so consider these two terms. The second term is very good, actually perfect, because the derivatives are hitting f, these are the high frequencies, and it only costs a gradient of u; I said that u decays, and I only have to spend one derivative. So if I take s large, this still decays like 1 over t squared; it is integrable in time, it will not bother you. Now look at the first term: f doesn't decay, so the d^s f factor doesn't decay, and d^s u will decay, but to get the decay you lose derivatives. Say you put d^s u in L infinity or L2, and the other factors in L2. If I want to say that d^s u in L2 decays like 1 over t squared, by the estimate I explained there, then I have to put f, which plays the role of the vorticity, in H^{s+2}, OK? If I want to get the decay, I have to lose derivatives, and then it doesn't close: clearly, you are losing two derivatives. So from this sort of heuristic, it seems that the problem is completely lost, because you are losing two derivatives. Usually in these problems, if you lose one derivative, you can hope to close the estimate in the analytic class; if you are losing two, there is nothing you can do. OK, so that's where you have to be more precise in the analysis and understand where the loss happens, right? So that's what I'm going to talk about next. Let me take like seven more minutes, or maybe ten, twelve. Oh, great.
So now I can tell you the whole nonlinear resonance story. Actually, I already drew the whole picture: the whole thing comes from this formula, it is really hidden here. Because the u — the u you see here is ∇^⊥φ, and on the Fourier side that ∇^⊥φ is more or less f̂ divided by k² + (η − kt)². So this term can be problematic at the times t ≈ η/k. OK, now let me go fast; I'm going to skip one thing and just say that we are going to use paraproducts, the Bony paraproduct. There is a small heuristic for why we do it, but it's more or less in the formula you see there: whenever you take derivatives, the factor carrying the derivatives is the high frequency, and the other one is the low frequency. So basically, I'm now going to set up a toy model that mimics the maximum possible growth coming from this term. All of this is heuristic, but somehow it will give me the maximum loss of regularity I can get from this. Notice I say loss of regularity instead of growth, because as you saw there, I can trade between loss of regularity and decay. So I split this term into two pieces, u_high·∇f_low and u_low·∇f_high. For those familiar with paraproducts, that's really a paraproduct; for those who are not, think of the product as a convolution on the Fourier side, and in the convolution you can decide which of the two factors carries the higher frequency. When u carries the low frequency, that is more or less the good term from before, so that term will be OK.
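A quick numerical sketch of why the times t ≈ η/k are the dangerous ones (the values of k and η here are illustrative, not from the lecture): the Fourier multiplier 1/(k² + (η − kt)²) sending the vorticity to the stream function is of size 1/η² at a generic time, so it gains two derivatives, but at the resonant time it is only of size 1/k².

```python
# Size of the multiplier m(t) = 1/(k^2 + (eta - k*t)^2), which relates
# omega-hat to the stream function phi-hat on the Fourier side.
def multiplier(t, k, eta):
    return 1.0 / (k**2 + (eta - k * t) ** 2)

k, eta = 2, 1000  # illustrative mode and frequency

# At a generic time the multiplier gains two derivatives (size ~ 1/eta^2)...
generic = multiplier(0.0, k, eta)
# ...but at the resonant time t = eta/k it is only of size 1/k^2:
resonant = multiplier(eta / k, k, eta)

print(generic)   # ~ 1/eta^2
print(resonant)  # exactly 1/k^2 = 0.25
```

The loss of derivatives in the heuristic above happens precisely because, at t = η/k, the (η − kt)² part of the denominator vanishes.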
The problematic term is when the high frequency is on u, because that's where you start losing. So this is the important term that can give me growth, or loss of regularity. Then we look at it, and the u_high is actually coming from f, because remember the u_high was this ∇^⊥φ. So a good candidate for the right heuristic — this is a toy model; we write it as a toy model in the paper — is to write the product as a convolution on the Fourier side and only take this term into account. You integrate in ξ over the low frequencies, with a kernel involving η − ξ and the resonant denominator; actually, here I wrote only one piece of the scalar product, the most important term in it. So this factor is the low frequency — this is the term I said doesn't decay, it corresponds to that guy — and the other factor corresponds, more or less, to this part. And part of the loss of derivatives is here: you see a ξ in the numerator, while in the denominator you have a ξ squared, so generically you gain two derivatives. But when t is exactly equal to η/l, the resonant part of the denominator is not there, so you are not gaining those two derivatives: you are losing a derivative. And if you then want to get decay, you have to lose even more derivatives. So basically, this is where the loss comes from: it comes exactly at the times when the frequency (l, ξ) talks to (k, η), that is, when t is like η/l — and this l is a low frequency, so you can think of it as fixed, equal to 1, say. So now, how do we set up the toy model? Let me jump a little bit; I'm going slightly fast. It turns out that in all this I can think of ξ as being equal to η: nothing very dramatic happens between ξ and η, so let me ignore the integral in ξ. I mean, this has to be proved, but think of ξ as being equal to η.
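As an equation, the reduced term reads roughly as follows (my reconstruction of the board, with the constant dropped and the simplification ξ = η already made):

```latex
% Toy model for the paraproduct term u_{\mathrm{high}}\cdot\nabla f_{\mathrm{low}}:
% the mode k at frequency \eta is forced by the nonzero modes l via
\partial_t \hat f(t,k,\eta)\;\approx\;
\sum_{0<|l|\lesssim\sqrt{\eta}}
\frac{\eta}{\,l^2+(\eta-l t)^2\,}\,\bigl|\hat f(t,l,\eta)\bigr|.
% Away from t=\eta/l the denominator is of size \eta^2, so the \eta on top is
% harmless; at the resonant time t=\eta/l it collapses to l^2, and the factor
% becomes \eta/l^2 -- the amplification produced by one echo.
```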
That's the first simplification: ξ = η. Then let me ignore this term — just think of it as a fixed constant, some sort of κ. So now η is fixed, and you can think of this as a system of ODEs, a system that couples all the modes l, and only the modes l ≠ 0 appear here. What we are trying to find is the maximum growth of this system of ODEs. At the end of the day there are absolute values everywhere, because I'm trying to find the maximum growth. Another reduction one can make is that I only need to look at l with l ≤ √η. It's not difficult to see: if l gets large, the l² term in the denominator starts helping you and kills some of the loss you are getting. So really, the difficulty is when l ≤ √η. Now one can look at this system of ODEs and ask how the largest possible growth can happen. It happens in the following scenario. η is fixed, and I take k to go from √η down to k = 1: starting from some k₀ ≈ √η, I look at the times between η/k₀ and η/(k₀ − 1), and so on, all the way out to time η. Remember that the mode k₀ grows at the time η/k₀ — that's where the resonant denominator disappears, so that's where the effect of that mode becomes strongest. So really, the worst growth for the energy comes when k₀ talks to k₀ − 1; then at a later time k₀ − 1 talks to k₀ − 2, and so on. Then we write down the solution of this ODE chain and see what the worst possible growth is, and you find that it is like exp(c√η).
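One can check the size of this cascade numerically (a sketch under the simplifications above; the function name and the sample frequency are mine): each step of the chain k₀ → k₀ − 1 multiplies the solution by roughly η/k², so the total growth is the product of η/k² over k = 1, …, √η, whose logarithm behaves like 2√η.

```python
import math

def log_echo_growth(eta):
    """Log of the product of the per-echo amplifications eta / k^2
    for k = 1, ..., floor(sqrt(eta))."""
    K = int(math.isqrt(int(eta)))
    return sum(math.log(eta) - 2.0 * math.log(k) for k in range(1, K + 1))

eta = 1.0e6
ratio = log_echo_growth(eta) / (2.0 * math.sqrt(eta))
print(ratio)  # close to 1: the total growth is ~ exp(2*sqrt(eta))
```

By Stirling's formula the log of the product is 2√η minus a logarithmic correction, which is the exp(c√η) growth quoted in the lecture.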
And that's how you see that Gevrey is the right type of regularity to take. Basically, since this scheme gives you the growth exp(c√η), you take your initial data with a Gevrey weight exp(λ η^s) on the Fourier side, where s is larger than 1/2; then even though you go through this growth, it's still fine. Maybe I should stop here. I didn't give you a statement, but I think at least I tried to explain what comes next. Any questions?

[Audience question, paraphrased: why do you really need the decay, rather than just a uniform bound?]

Right. What you want is that your f(t) converges to some f_∞ when t goes to infinity. If you want something like that to happen, you want the time derivative to be integrable, and to get the time derivative integrable you don't want to lose derivatives there. Because saying that f converges means that, in the end, you start looking linear, right? Good. Other questions?

[Audience question: is the exponent 1/2 sharp?]

Yes, s bigger than one half — we are working on that now, with a postdoc. You see, in this toy model I got rid of many things: I put absolute values and said, OK, let me just look at this. But does this growth really happen? The toy model gives me this growth at the end, but the question is whether there are cancellations that can prevent it.
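To see why the exponent 1/2 is the threshold (a sketch with illustrative constants; the helper name and the values of λ, c, η are mine, not from the lecture): a Gevrey weight exp(λ η^s) with s > 1/2 eventually dominates the echo growth exp(c√η), while s < 1/2 is overwhelmed at large frequencies.

```python
import math

def survives(s, lam=1.0, c=2.0, eta=1.0e8):
    """Does the Gevrey weight exp(lam * eta^s) beat the echo growth
    exp(c * sqrt(eta)) at this (large, illustrative) frequency?"""
    return lam * eta**s > c * math.sqrt(eta)

print(survives(0.6))   # True: s > 1/2 absorbs the growth
print(survives(0.4))   # False: s < 1/2 is overwhelmed
```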
[Audience question: you put absolute values everywhere — could the real exponent be very different?]

Yeah. Whenever I start looking at this sort of growth, the way it is computed is that you put absolute values, you get some large ODE system, and then you say, OK, I can simplify it. But then the question is, does this really happen? So you go back, you put energy in one mode, you follow how it moves through the modes, and you try to make sure that the growth really occurs — to justify that the Gevrey class is really necessary in the proof, and that it is not some very different exponent. And here, no, I think it is the right one: the Gevrey exponent we have is the right one. OK, thank you.