Thank you, Camillo, and thank you to the organizers for having us here. It's really been a pleasure, and also very productive, and because of that I changed the title, so sorry for the change. This is all joint work with Tristan and Nader, who are both here. There will be some overlap with Camillo's talk tomorrow, but I hope not too much, and you should really view these talks as complementary to each other. So we're going to deal first of all with the Euler equations: incompressible, density one, and so on. This is 3D Euler for the velocity field u; p is the pressure. Although we're going to talk about turbulence, and turbulence is generated at boundaries, the statistical theories of turbulence deal with the behavior away from the boundaries. So for all practical purposes we might as well consider a domain without boundaries, and throughout the talk the domain is the torus T^3. Okay, the conservation law that we have for 3D Euler is the energy, the kinetic energy; and by the way, you can put in a force if you want. If you compute the rate of change of energy in 3D Euler, you get that dE/dt is equal to minus the integral of (u . grad u) . u minus the integral of grad p . u, and if you want to include the force, you can. Because of incompressibility, meaning the vector field is divergence free, you can write each of these integrands as a total divergence, and if you're allowed to do this integration by parts, you get either the energy balance or, if there's no force, energy conservation. So this is if u is smooth. The Onsager conjecture deals with how smooth you have to be in order to do this computation. And okay, it's not very complicated. We are dealing with weak solutions, and weak solutions just means you're solving this in D' of time cross the torus. So in particular you can test with smooth objects, and in particular you can test with S_j^2 u.
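The formal computation just described can be written out; this is a sketch consistent with the board, using div u = 0:

```latex
\frac{d}{dt}\,\frac12\int_{\mathbb{T}^3}|u|^2\,dx
 = -\int_{\mathbb{T}^3}(u\cdot\nabla u)\cdot u\,dx-\int_{\mathbb{T}^3}\nabla p\cdot u\,dx
 = -\int_{\mathbb{T}^3}\nabla\cdot\Big(\tfrac{|u|^2}{2}\,u + p\,u\Big)\,dx = 0,
```

where the last equality is the integration by parts that requires smoothness; with a force f, the right-hand side is instead the work term, the integral of f . u.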
Here S_j is the Littlewood–Paley projection onto frequencies less than 2^j. If you do that, then you can compute d/dt of (1/2)||S_j u||_{L^2}^2, which equals what is called the energy flux, the flux through the boundary of this Fourier shell. After integration by parts that will be equal to the integral of u tensor u contracted with grad S_j^2 u, plus the pressure term if you want; but the pressure term disappears, because u is divergence free and S_j^2 u is divergence free as well, so it drops out. And if you want you can put in the force, but for most of the talk we'll ignore the force. So the question is, when is this zero? Well, you can put one of the S_j's onto u tensor u, and you can also subtract the term with S_j u tensor S_j u contracted with grad S_j u. Why am I allowed to subtract it? Because it is zero: S_j u is a very smooth function, so I have no problem showing that object vanishes. And this quantity is called the energy flux, denoted in some papers by capital Pi_j: the energy flux through the shell at frequency 2^j. So now you may ask, what is the bound on this? I'm not doing this in very chronological order, but it was proven in a paper of Cheskidov, Constantin, Friedlander, and Shvydkoy in 2008, following an earlier idea of Constantin, E, and Titi from '94, and of Eyink, also '94, and at the same time Duchon and Robert. What they proved, and by the way this has nothing to do with Euler, the bound I'm showing here holds for any divergence-free vector field, is that |Pi_j| is at most a sum over i of 2^{-(2/3)|j-i|} 2^i ||Delta_i u||_{L^3}^3, where Delta_i u is the Littlewood–Paley piece, just S_{i+1} minus S_i applied to u, okay, just the usual. So now it's very clear when this goes to zero as j goes to infinity, which would correspond to energy conservation, right, the flux vanishing. And the answer is right there: what you recognize, of course, is the Besov space B^{1/3}_{3,infinity}, with L^3 in time.
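In symbols, the flux and the bound just described look roughly as follows (a sketch; signs and constants are as I recall them from the Cheskidov–Constantin–Friedlander–Shvydkoy paper):

```latex
\Pi_j := \int_{\mathbb{T}^3}\big(S_j u\otimes S_j u - S_j(u\otimes u)\big):\nabla S_j u\,dx
       = -\frac{d}{dt}\,\frac12\|S_j u\|_{L^2}^2,
\qquad
|\Pi_j| \lesssim \sum_i 2^{-\frac23|j-i|}\,2^{i}\,\|\Delta_i u\|_{L^3}^3 .
```

Boundedness of 2^{i/3}||Delta_i u||_{L^3} makes the right-hand side finite; its vanishing as i goes to infinity makes the flux vanish.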
If you just have the supremum over the Littlewood–Paley pieces bounded, then you will get that the limit is finite; you will not get that it's zero. To actually get that it's zero, you need these Littlewood–Paley pieces to actually go to zero. And this is the main result of Cheskidov, Constantin, Friedlander, and Shvydkoy (alphabetically; sorry, I cannot spell): if a weak solution of Euler belongs to this space, then energy is conserved. And this is really the reason why, because of this flux estimate. The flux is really the important thing, because it's what connects the Onsager theory to the Kolmogorov theory; that's the only reason I introduced it. Now, of course, this is the last in a sequence of papers, and I have mentioned Eyink '94, Duchon and Robert, and Constantin, E, and Titi, who have slightly more restrictive hypotheses; in particular, in the paper of Constantin, E, and Titi, you can do B^{1/3+epsilon}_{3,infinity}. So what is the Onsager conjecture about? It's basically about the sharpness of this bound. For weak solutions whose Littlewood–Paley pieces at the (1/3, 3) regularity do not go to zero, for instance solutions in B^{1/3}_{3,infinity}, is it true that energy is conserved or not? So we could state it like this, and this is definitely not how Onsager in 1949 stated his conjecture; I think Camillo will discuss this subtle difference. We can rephrase the Onsager '49 conjecture as having a rigidity part, part (a). The rigidity part is exactly this: weak solutions as smooth as, or smoother than, L^3 in time with values in B^{1/3}_{3,c_0}, whatever "smoother" means, conserve energy. That's rigidity. Just to be clear, it does not imply uniqueness; it implies energy conservation.
And part (b), the opposite of rigidity: in any space X less smooth than that, whatever "less smooth" means, for instance L^3 in time B^{1/3}_{3,infinity}, there exist weak solutions of 3D Euler in X which do not conserve energy. So that can be stated as the Onsager conjecture. And in the past years there has been tremendous progress on this question. It is currently still open, but there has been tremendous progress, especially through the program started by De Lellis and Székelyhidi. I will mention that in a bit, but before I do, let me erase Euler from the board, write Navier–Stokes, and say a few words about connections to the Kolmogorov 1941 theory of turbulence. So, Navier–Stokes in 3D. I should not have erased the equation. It's very important that we put in the force now, as opposed to Euler: this is a theory about turbulence, about the long-time behavior of solutions to this equation, and of course if you remove energy you have to put energy back. So think of F as stationary in time and smooth, supported in frequency, for all we care, in the unit ball: at low and medium scales. Okay, so if you check what happens to the energy for Navier–Stokes, d/dt of (1/2)||u^nu||_{L^2}^2, you can actually do the same computation (I should really do S_j here, but let me not), and you will get the limit as j goes to infinity of Pi_j^nu, you will get F . u^nu, and you will get the dissipation, minus nu ||grad u^nu||_{L^2}^2. (Energy is definitely going down; thank you.) Now, as opposed to Euler, for Navier–Stokes you can show that the flux term goes to zero under much weaker assumptions. H^{1/2} is actually not enough; you need a bit better than H^{1/2}. And I think Shinbrot has the best result. I really don't want to insist on this, but I think the best result is this: L^p in time L^q in space with 1/p + 1/q at most 1/2 and q at least 4.
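Schematically, the Navier–Stokes energy balance described above reads (a sketch, with the same flux as before, now for the viscous solution):

```latex
\frac{d}{dt}\,\frac12\|u^\nu\|_{L^2}^2
 = -\lim_{j\to\infty}\Pi_j^\nu \;-\; \nu\,\|\nabla u^\nu\|_{L^2}^2 \;+\; \int_{\mathbb{T}^3}F\cdot u^\nu\,dx,
```

and Shinbrot's criterion says the flux term vanishes whenever u^nu lies in L^p_t L^q_x with 1/p + 1/q at most 1/2 and q at least 4.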
This is due to Shinbrot in the '70s, and I think it is the weakest condition under which the flux term is known to be zero. So in particular, if you wait for the solution to become statistically steady, or at least to reach a place on the attractor, then because you're putting in a force which is very smooth, the solution will inherit smoothness from the force. In particular it will obey this condition, and therefore the anomalous flux term is not there for Navier–Stokes. So if you're looking at a solution which has lived forever, whatever statistically steady means, then for Navier–Stokes, d/dt of the energy equals minus the energy dissipation rate plus, of course, what you put in. So what is the Kolmogorov theory about? The main assumption of this religion is the following. Call epsilon^nu the energy dissipation rate, the dissipation due to the Laplacian, and take an average. An average against what? Well, if you knew you had a unique ergodic invariant measure, it would just be the average against that unique ergodic invariant measure; if additionally you had a strong law of large numbers, you could a posteriori deduce that this is just a long-time average, let's say. So whatever this means, it is some average encoding statistical stationarity; if you want to think of long-time averages, that's okay. And the main assumption of Kolmogorov is that the lim inf as nu goes to 0 of epsilon^nu is some number epsilon which is strictly positive. So this is one of the fundamental assumptions of the Kolmogorov theory of homogeneous isotropic turbulence: homogeneous meaning it happens everywhere, isotropic meaning it happens in the same way in all directions. So in particular you should keep this in mind: it's not some kind of sparse thing; it's really filling everything up. Okay, so the question is, is epsilon related to Onsager? The answer is of course yes.
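In symbols, with angle brackets denoting whichever average encodes statistical stationarity:

```latex
\varepsilon^\nu := \nu\,\big\langle \|\nabla u^\nu\|_{L^2}^2 \big\rangle,
\qquad
\liminf_{\nu\to 0}\,\varepsilon^\nu \;=\; \varepsilon \;>\; 0 .
```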
And the relation is this. You can do a formal argument on statistically stationary solutions: if you had a statistically stationary solution of Navier–Stokes, and you could pass weakly in L^2 to a statistically stationary solution of Euler, then you could prove that epsilon is exactly the anomalous energy flux of that Euler solution. Well, these are a lot of ifs, so the word "prove" comes with a lot of ifs, and I'm not sure what the value of that is, but this should be exactly the connection. Thank you. So this is one of the main connections between the two theories: the positive epsilon assumed by Kolmogorov is exactly what makes the limiting solution not conserve energy. And again, this is derived, typically in physics textbooks, under the assumption of statistical stationarity and so on. Okay, are there other connections? The answer is yes, of course, connections which are much finer in some sense, or at least as fine, and these have to do with structure functions. And this is of course related to this talk more than anything. So what are structure functions in the Kolmogorov theory? You take a vector l and you look at increments of the velocity field by l. In a lot of books, let's say in Frisch's book, you only see the longitudinal structure function, where you look at the component of the increment parallel to the vector l; that's a scalar function. You raise this to the power p, you integrate, you take a statistically stationary average, and you call this number S_p^nu of l. So these are the structure functions, the p-th structure function of Kolmogorov. And what is observed experimentally, with tremendous error bars as far as I understand, so I will not spend too much time on this, is that the third-order one is proportional to minus (4/5) epsilon l; I hope I didn't screw this up. And this holds uniformly as nu goes to zero, for l^{-1} in the inertial regime.
Here l^{-1} is a frequency scale, and l^{-1} has to live between the frequency where you put in energy and the frequency where dissipation kills everything. So this is supposed to hold in that regime, and it is called the Kolmogorov four-fifths law. Now, why did I put this on the board? You should look at this, and you should look at that Besov space, and you should say: ah, it smells a lot like the same thing. You have a power three, and you have to divide the l into three pieces. So if you divide by l, you exactly have a velocity increment over l^{1/3}, raised to the power three, and this is bounded, does not go to zero, behaves like epsilon. So this is a very, very deep connection between the two. I will not discuss the number 4/5 at all. Proving this and proving the Onsager conjecture are in some sense the same thing; except the Onsager conjecture is about Euler, and this is a statement about Navier–Stokes in the vanishing viscosity limit. So from that point of view, currently we have no hope on this one, and we have some hope on that one. Now, in the theorem I will mention, the joint work with Tristan and Nader, we are actually not able to prove this; we are able to prove something about S_2. So again, the Kolmogorov theory predicts that S_2 should behave like a constant times epsilon^{2/3} l^{2/3}. As far as I understand, experimentally there are deviations from this law; this is called intermittency. The four-fifths law is super robust; this one, as far as I understand, is not as robust. And this is, of course, exactly the same, if you translate this finite-difference statement to Fourier, which of course you can, as saying that the energy spectrum at wave number k, which is d/dk of (1/2)||P_{<= k} u||_{L^2}^2 (sorry), behaves like epsilon^{2/3} k^{-5/3}.
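To collect the statements from the board in one place (longitudinal increments, with the textbook constants; a sketch rather than precise statements):

```latex
S_p^\nu(\ell) := \Big\langle \int_{\mathbb{T}^3}
 \Big(\big(u^\nu(x+\ell)-u^\nu(x)\big)\cdot\tfrac{\ell}{|\ell|}\Big)^{p}dx \Big\rangle,
\qquad
S_3^\nu(\ell) \approx -\tfrac45\,\varepsilon\,|\ell|,

S_2^\nu(\ell) \approx C\,\varepsilon^{2/3}|\ell|^{2/3}
\;\Longleftrightarrow\;
\frac{d}{dk}\,\frac12\big\|P_{\le k}\,u\big\|_{L^2}^2 \approx C'\,\varepsilon^{2/3}\,k^{-5/3},
```

both for |l|^{-1}, respectively k, in the inertial range.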
So the famous five-thirds spectrum that we hear about in all the physics books is the same as the second-order structure function. And this is related to what we're going to talk about, because it has spatial integrability 2, not 3. So with this introduction in mind, let me state the theorem. Ah, I cannot state the theorem yet. No, sorry, I should first tell you a little bit about what is known. There has been a lot of progress on part (b) of the Onsager conjecture, and if I just tell you the theorem in the absence of that context, it will not make any sense. Camillo, I'm sure, will review this and give a lot more details than I will, so I'm going to go on purpose a bit fast on this. There have been works of Scheffer, '93 I think, and Shnirelman, 2000, where they prove that there are weak solutions of 2D Euler in the space L^2 which do not conserve energy; there are many of them. (Transfer the "c" from Shnirelman to Scheffer, actually. Thank you.) And these constructions are in some sense wild; they seem out of the blue. Then the first time this was put in a rigorous framework, and this is the framework of convex integration, is in the work of De Lellis (I should make sure not to misspell this one, but that one is hard) and Székelyhidi in 2009, where they use convex integration to construct weak solutions of Euler which are bounded and do not conserve energy. And the beauty of this was not just, okay, you improved the space X, but that it finally put this into a context related to the Nash isometric embedding problem and so on; it doesn't seem out of the blue anymore. Now, there is a serious obstruction at L^infinity to going past it, a very serious obstruction because of high-high interactions. And De Lellis and Székelyhidi, a couple of years later, were able to successfully use special solutions of 3D Euler, special stationary solutions called Beltrami flows, to push this to C^{1/10 -} in space-time.
And for a while this was the record, quote unquote, and since this work there have been three more significant improvements. By using more structure of the equation, in particular by using the material derivative slightly more carefully, Isett, and at almost the same time Buckmaster, De Lellis, and Székelyhidi, in 2013, improved this to C^{1/5 -} in space-time. So again, by keeping more of the transport structure of the equation. And it seemed like going forward required a completely new idea, and indeed it does require a new idea. The Onsager conjecture and the Kolmogorov spectra are not about Hölder spaces; they're about L^3 integrability. With this realization, Buckmaster, in his PhD thesis, constructed solutions which are C^{1/5 -}, with the additional property that at almost every time they are C^{1/3 -}. Now, this "almost every time" is not so nice; it's not an integrability statement. But this group of authors (I will just draw an arrow) improved this to L^1 in time C^{1/3 -}. And to date, this is the world record, quote unquote. To achieve this, they basically had to exploit the full power of temporal intermittency, if you want to make a connection with turbulence, and I'm sure Camillo will mention at least some of these works. So, what is the main theorem that I'm going to talk about? I'm going to put it on this board. Again, this is joint work with Tristan and Nader. There exist weak solutions of 3D Euler which live in the space C^0 in time, H^{1/3 -} in x (by H I mean the Sobolev space, the L^2-based Sobolev space) and which do not conserve energy. Okay, so a couple of remarks are in order. First, this is precisely, up to the epsilon of course (there's a minus here), the Kolmogorov spectrum. It's exactly that. Except it's for Euler, not for Navier–Stokes, which is a major difference. So: the Kolmogorov spectrum.
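The theorem, stated in symbols:

```latex
\textbf{Theorem (joint with Tristan and Nader).}\quad
\text{For every }\beta<\tfrac13\text{ there exist weak solutions }
u\in C^0_t H^{\beta}_x\text{ of 3D Euler which do not conserve the kinetic energy.}
```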
The second and rather more curious remark is this. Compare this result with that one, and freely use the power of interpolation. So Buckmaster, De Lellis, and Székelyhidi prove that one third of a derivative belongs to L^1 in time, L^infinity in space; that's basically what they prove. What we prove is the same kind of thing, but with L^infinity in time and L^2 in space. Now please interpolate these spaces: you get exactly L^3 in time and L^3 in space. These are not just some random spaces. But you cannot just do interpolation, because it does not mean that there's any intersection between the two sets of solutions; that's why I put a dotted line, just to be clear. The two sets, the uncountably many solutions constructed there and the uncountably many solutions that we construct, may have empty intersection, and they very likely do. So this dotted line will be, I guess, the scope of another paper, and then we can put the Onsager conjecture to rest. So I will try to sketch a little bit of the proof in the remaining 20 minutes. I need to erase some boards. I will not be able to fully compare our proof with the previous proofs, because it would really take many hours to describe the previous proofs, but there's the advantage, of course, that Camillo will mention some of them, so I will freely point out some differences, which may seem a bit random at first. Okay, so the proof is based on the same idea of using a convex integration scheme. What is the main idea of a convex integration scheme? You approximate the solution of 3D Euler with some vector field; you make a guess, v_q. And in doing so you make an error, R_q. So this is your guess and this is your error, and they will not solve Euler; they will instead solve the Euler–Reynolds system. (Ah, I don't want to write the pressure.) Here P is the Leray projector onto divergence-free functions. So you're not solving Euler.
Solving Euler would mean putting zero here, and you're not able to do that because you made a guess; it's a rough guess. So now, the goal of the game: you have some information on this error, and in particular you know that it lives at a certain frequency in Fourier, at frequency lambda_q. (Are you projecting both, for the divergence? I guess there's no need, right? Maybe I should have written the pressure, because these errors will propagate; so I should write p_q in the triple.) And these approximate solutions that you have made live at a certain frequency lambda_q, which you should think of as a huge number to the power q. They're not, actually; they're super-exponential. But for the purposes of a talk I think it's fair to say we can think of them as growing exponentially, with q going to infinity. So what is your goal? You want to add a perturbation to this approximation. You define v_{q+1} = v_q + w_{q+1}, with w_{q+1} a perturbation, in such a way that adding this object makes the error you have made smaller. You're adding a correction at the higher frequency, and you will therefore obtain a new approximate solution of the same form, living at the higher frequency (you'll see why), but hopefully with an error which, in a certain norm, is much smaller than the previous one. If you can continue to do this, in the limit you will have zero error in that norm. Now, if in this process you also manage to prove that the sequence of v_q's remains bounded, then you will have a theorem. Otherwise you may correct the error but make the v_q's so large that they don't converge to anything. So you have to do two things: decrease the Reynolds stress error, and make sure you're not adding too wild a correction. Just for purposes of notation, let's say beta is a number less than a third; beta is exactly whatever this minus means, the regularity that we get.
And let's say the error we have made is measured in L^1. Okay, you should notice something: the velocity is in L^2 and all of a sudden I'm writing L^1. The Reynolds stress is quadratic in the velocity field; it's like v tensor v. So in some sense you have to put it in L^1; you cannot afford any other space. And let's say the error we have made has a certain size: because of the regularity beta we want, let's say it has size lambda_q^{-2 beta}, which, instead of writing this object, we're going to write delta_{q+1}. Okay, so our goal is that the new error has size delta_{q+2}, which is smaller, right? Because this is a negative power and the frequency increased, this is smaller. So that will be our goal; that's how you can close the induction. (By the way, I'm skipping a ton of details.) Now, what is the equation obeyed once you have made this correction? We can write it out. Here you have the old Reynolds stress, and coupled with it the interaction of w_{q+1} with w_{q+1}, and the pressure gradient. And then you have two more errors: one when the material derivative falls on your perturbation, and one when your perturbation stretches the previous vector field. Okay? So, following the notation introduced by Camillo and László, we call the first one the oscillation error; you should think of it as high-high interactions in frequency, nothing more, just high-high. The second we call the transport error, for obvious reasons, and the third the Nash error, because in the isometric embedding problem you have a very similar term. So now the goal is to rewrite this whole right-hand side of the equation as div R_{q+1} and estimate its size, and hopefully it is of size delta_{q+2}. Let me say that lambda_q is approximately lambda_0^q, in the dyadic sense, within distance 1, or however wide a range you're allowed to be in. So v_q, because it is a sum of many pieces, lives in a ball of this size in frequency.
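Schematically, the Euler–Reynolds system and the decomposition of the new error look as follows (a sketch; the Leray projector and the precise pressure bookkeeping are suppressed, as on the board):

```latex
\partial_t v_q + \nabla\cdot(v_q\otimes v_q) + \nabla p_q = \nabla\cdot R_q,
\qquad
\|R_q\|_{L^1}\le \delta_{q+1}:=\lambda_q^{-2\beta},
\qquad
\lambda_q\approx\lambda_0^{\,q},

\nabla\cdot R_{q+1}
 = \underbrace{\nabla\cdot\big(w_{q+1}\otimes w_{q+1}+R_q\big)+\nabla(p_{q+1}-p_q)}_{\text{oscillation}}
 + \underbrace{\partial_t w_{q+1}+v_q\cdot\nabla w_{q+1}}_{\text{transport}}
 + \underbrace{w_{q+1}\cdot\nabla v_q}_{\text{Nash}} .
```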
Each of the corrections lives in a shell, in a Littlewood–Paley sense. That's a subtle thing; I will mention it in the proof, it shows up at the end. You don't take the whole dyadic shell; you may have to cut it. (Yes, you may have to cut it.) In all the previous constructions it is in fact isolated points on spheres; there is no thickness. In this construction, in fact, one of the main ideas is that we have thickness, which is basically related to saturating Bernstein inequalities, intermittency, and so on. And it's not literally the dyadic shell, I think: w_{q+1} has a small piece which is actually even living at smaller frequencies, but it's much smaller. It's subtle, basically, how you choose it exactly. (Yeah, it's a very subtle issue. We actually make it supported there because we hit it with a projector. Yeah, we hit it on top with a projector, just so that we can do it. You should write down the q+1. Where? Ah, here? Thank you, thank you.) Okay. So, you should have already noticed something: if w_{q+1} tensor w_{q+1} has any hope of cancelling R_q, which in turn has size delta_{q+1}, then w_{q+1} in L^infinity in time L^2 in space had better have size (ah, I don't want to make this precise) delta_{q+1}^{1/2}. Okay? Because otherwise it cannot possibly cancel it. And it lives at the next frequency, at lambda_{q+1}, as Alex pointed out; in a shell, in fact, whose thickness is rather subtle. Okay, so now I want to just point out one thing very quickly. Because of the Nash error, you actually cannot go with beta more than a third, and you can very quickly see this. You can just compute how big div^{-1} (whatever that means) of w_{q+1} . grad v_q is. Right? To get the size of this you need to invert the divergence. In L^infinity in time L^1 in space you can check what the sizes are, and you get lambda_q over lambda_{q+1}.
The lambda_q is there because you have a derivative falling on the low-frequency guy; the lambda_{q+1}^{-1} is there because you're inverting an order-minus-one operator on something which lives at the higher frequency. And then you have the sizes of the objects, which are just delta_{q+1}^{1/2} delta_q^{1/2}. Okay, and you can now check that this is of the desired size delta_{q+2} precisely when beta is at most a third. It's a computation; you can do it very quickly if you assume the frequencies are geometric, and it takes a while if they're super-exponential (by a while I mean a couple of minutes). Okay, so now, what is the main idea? I haven't said anything about the proof yet; all these ideas, by the way, are in all the previous constructions, so I said nothing new so far. So now, in the remaining 10 minutes, I will say something new. Why do the previous constructions get stuck at that regularity? At least one of the reasons is that the transport error is very big. You're making high-frequency perturbations, so the material derivative can hit these high-frequency perturbations, and that will completely kill you; you lose one of those exponents. So what you do then, instead of having stationary Beltrami flows, is let them flow with the velocity field of the previous step, with v_q. When you do that, a certain property that you used gets destroyed (that's why you started with Beltrami flows, namely a certain linear-algebra cancellation), because your oscillations are no longer exactly Beltrami. So you can only let the transport equation run for some short time, and to decide how long, you need to put in a time cutoff. When your material derivative hits that time cutoff, you make an error which at first seems to kill your scheme: it's as big as what you started with, you have no improvement, you think you're done. You can play some games, and that's how you get to the L^1 in time result, by interpolating, but basically this is the main enemy. So, what do we do, in fact?
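The quick computation just mentioned is easy to check numerically. This is an illustrative sketch, not the paper's argument: it assumes geometric frequencies lambda_q = lam^q (the actual ones are super-exponential) and takes the Nash-error heuristic (lambda_q / lambda_{q+1}) delta_{q+1}^{1/2} delta_q^{1/2} at face value:

```python
# Sketch: when does the Nash-error heuristic close the induction?
# Assumption: geometric frequencies lambda_q = lam**q (the paper's are
# super-exponential); delta_{q+1} = lambda_q^{-2 beta}, as on the board.
lam = 2.0

def delta(k, beta):
    # delta_k = lambda_{k-1}^{-2 beta}; the stress R_q has L^1 size delta_{q+1}
    return lam ** (-2.0 * beta * (k - 1))

def nash_error(q, beta):
    # (lambda_q / lambda_{q+1}) * delta_{q+1}^{1/2} * delta_q^{1/2}
    return (lam ** q / lam ** (q + 1)) * delta(q + 1, beta) ** 0.5 * delta(q, beta) ** 0.5

def closes(beta, qmax=50):
    # the induction closes if the Nash error is at most delta_{q+2} for all q
    return all(nash_error(q, beta) <= delta(q + 2, beta) for q in range(1, qmax))

print(closes(0.33))  # True:  beta < 1/3 closes the induction
print(closes(0.34))  # False: beta > 1/3 does not
```

Comparing exponents, the condition reduces to 3 beta <= 1, which is exactly the sharpness the speaker attributes to the Nash error.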
How do our corrections actually look? So again, I will not write Leray projectors, and I will not write Littlewood–Paley projectors, but they're there. Basically, the perturbation is a sum over frequencies of some amplitudes times some objects W_c, which are the building-block high-frequency oscillations. I will tell you exactly what those are; well, I will not tell you much about what they are. What is this index set? You're looking at the sphere of radius lambda_{q+1}; this is an actual sphere. And in the previous constructions this index set was a couple of points, always including the diametrically opposite one. A couple of points. W_c is basically (I'm exaggerating) some coefficients times e^{i lambda_{q+1} c . x}, composed with the flow of the field. So this would be at time zero, but of course you let it flow. And what happened in the previous constructions is that when W_c interacted with W_{-c}, the phases cancel, you produce something low-frequency, and that low-frequency object is exactly what cancels this lower-frequency Reynolds stress. Okay, that is this interaction. How about the other interaction? That interaction is high-high and spits out high, at the same frequency lambda_{q+1}, and that one you win because you're inverting the divergence. So here I want to emphasize: it spits out something at very high frequency, lambda_{q+1}, and you win by inverting the divergence. And this is why the oscillation error actually had a bit of room, apparently; not once you optimize with the transport error, of course. So what is our construction instead? For our objects W, let me start by drawing the north and the south pole. There are finitely many of them; think of the others as rotated. Instead of a point in frequency, a single vector, they have a Dirichlet kernel. So basically, at the north pole, at the south pole, and at every other point, at the dots that were in the previous construction, we put a Dirichlet kernel. What do I mean by that?
We really take this vector c-bar, which is of size lambda_{q+1}, and we add to it an object which is really a 2D Dirichlet kernel. This Dirichlet kernel has certain features: a certain separation between consecutive elements, and a certain number of elements. Okay, so separation and number of elements; let's call this number r, just so that we fix notation. (Sorry? It will depend on q. The higher the q, the more objects you put in.) Okay, so what happened? What is the advantage of this? What is the norm of this new object that we are constructing? (I guess it depends on c-bar, which is the center.) Well, like any Dirichlet kernel: when you compute the L^1 norm you get a logarithm; in L^2, the square root of the number of elements; in L^infinity, the number of elements. So keep in mind what we're doing: we want this relation for the size of the perturbation, and we already had this relation for the coefficients; already the amplitudes are of the correct size. So now, if I apply the Hölder inequality and this new factor has any size more than one, I will screw up my correction. It doesn't exactly work like that: you can use almost orthogonality. This kernel is at much higher frequency than the amplitudes, so when you estimate the L^2 norm of the product, it's like the product of the L^2 norms, in fact. So if you don't want to destroy the size of your perturbation, you normalize your Dirichlet–Beltrami kernel to have L^2 norm 1. That's the correct rescaling if you want to maintain something like this. This almost orthogonality really comes in only if you have sufficient separation between the frequencies there; if you don't have enough separation, this gets screwed up immediately. Okay. L^2 norm 1; well, by the rescaling, the L^1 norm is now small, some inverse power of r, whatever, a tiny number. Before, when you just had one element, it was order 1.
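The norm bookkeeping for a Dirichlet kernel is easy to verify numerically. A sketch in one dimension (an illustration only: the construction uses a 2D kernel attached to Beltrami waves, which is not attempted here), with the normalized measure dx/2pi on the circle:

```python
import numpy as np

def dirichlet_norms(r, n=20001):
    """L^1, L^2, L^inf norms (normalized measure) of D_r(x) = sum_{|k|<=r} e^{ikx}."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    D = np.real(sum(np.exp(1j * k * x) for k in range(-r, r + 1)))
    return float(np.mean(np.abs(D))), float(np.sqrt(np.mean(D ** 2))), float(np.max(np.abs(D)))

L1, L2, Linf = dirichlet_norms(50)
# With 2r+1 = 101 elements: L2 = sqrt(101) and Linf = 101, while L1 grows
# only logarithmically in r. Normalized to L2 = 1, the L1 norm becomes small.
print(L1, L2, Linf)
```

This is the gain exploited in the transport error: after the L^2 normalization, the kernel contributes a small L^1 factor, where a single Fourier mode would contribute a factor of order one.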
So now all of a sudden, when you're computing the transport error (and with this I think I have to finish very, very quickly), the material derivative cannot fall on these Dirichlet–Beltrami waves, because we flow them with the vector field of the previous step; it can only fall on the amplitudes. (Sorry? There's an extra v. Yes, there's an extra v. Thank you, thank you. We have a room full of editors. Yes, yes.) This is this, right? Right. It cannot fall here, because it's transported, so it has to fall there. So now, when you compute the L^1 norm (this is what you have to compute, the L^1 norm of the transport error), again you need to use almost orthogonality: this lives at much higher frequency than that one, if you have enough separation. So when you compute the L^1 norm, you get the L^1 norm of this times the L^1 norm of that. This gives you exactly the same thing as in the previous constructions, except you now have an extra factor of r_q^{-1}. You have improved your main enemy by exactly this object. Now, of course, this improvement does not come for free. Let me blow up this Dirichlet kernel. Okay, so these elements help you when this one interacts with exactly its antipode at the south pole; you're good. When this interacts with that, you're good, because those always spit out low frequencies. Okay, so, one more minute. They spit out low. But now, what happens if (there's no color; okay) this one doesn't interact with the point at the south pole but with one of its neighbors? How high is that interaction? The answer is: not very high. It's only that high. So now it seems like you're stuck, right? The error you made from high-high interactions is actually not high; but it's not sufficiently low either. It's in some intermediate range. And this seems to screw you, and then it doesn't, if you now include more Reynolds stress corrections: you come with another velocity field and you correct those interactions which are of medium size.
Those which are of high frequency you can always control by inverting the divergence. And this is where the thickness of the shell comes in: the thickness of the shell is how high you have to push the errors so that you win from the divergence. Okay, you optimize over that. And now we basically have a chain of Reynolds stress errors at every single step, every single one correcting the medium-size errors you have made from these high-high interactions which don't spit out high frequencies. The advantage is that at every single step of this chain you have slightly decreased the amplitude, and your separation is also slightly increasing, so you have fewer and fewer enemies. And we can prove that after finitely many steps, independent of q, you stop and you achieve the desired bounds. And I'll finish here. (Which of those parameters depends on t? On t? All of them. The W I showed you is only the initial data; these are flowed by the velocity field, and the amplitudes have to depend on x and t because they're correcting a Reynolds stress which itself depends on x and t. Any chance of running the two iterations at the same time, so as to get one solution that works in both? The two iterations at the same time, maybe not. But getting there, maybe yes, though not by running them at the same time. It seems that this temporal intermittency that you built doesn't really gel with this spatial intermittency that we built. And there's some idea; we don't know yet if it gets you all the way. It gets you a bit, but we don't know yet how far.)