Thank you for the opportunity to speak at this wonderful place; it's the first time for me. And thank you to the chairman, who comes from one of the best universities in the world. I'm going to talk about the Euler equations and the isometric embedding problem. You already saw a lot about the Euler equations in Vlad's talk. So these are the incompressible Euler equations, and the question is whether the energy identity, the energy conservation, which is this identity here, is valid or not. Onsager's conjecture is the following: yes, energy is conserved if you have the following condition; and no, there are solutions which do not conserve energy if the condition is weaker. Notice that in the original conjecture the constant here is a $C$ which is independent of $x$, $y$, and $t$. So in a sense we could say that here we have one third of a derivative of $v$ in space, in $L^\infty$ in time and space, and there we have $1/3 - \varepsilon$ of a derivative, again in $L^\infty$. You heard in Vlad's talk all the results which are known so far about this conjecture. For the yes part, Constantin, E, and Titi were actually the first to prove exactly the statement of Onsager, after work of Eyink, and then there have been refinements. For the no part, you have Scheffer in '93 and Shnirelman in '98, with $v$ in $L^2$. Then you have László and myself in 2008, with $v$ in $L^\infty$; again László and myself in 2011, with $v$ in $C^0$; then slightly after, $1/10$ of a derivative of $v$ in $L^\infty$; then Isett in 2013, with $1/5$ of a derivative in $L^\infty$, and an alternative proof by Tristan, László, and myself around the same time. Then there is again Tristan, László, and myself, and this time it is really $1/3$ of a derivative, but in $L^1$ in time and $L^\infty$ in space. And then there is Tristan, Nader, and Vlad, who have $1/3$ of a derivative in $L^\infty$ in time and $L^2$ in space. These are the results so far. And Vlad discussed that probably the correct conjecture is that the norm over here should be $L^3$ instead of $L^\infty$; the funny thing is that these last two results interpolate exactly to $L^3$, although we don't know how to interpolate them. As you see, it's a hell of a lot of exponents, and I want to give you a feeling for these exponents today. But I want to start with the origin of the methods, or at least of the ideas, that we are using to prove these results, and that goes back to the isometric embedding problem and Nash. Here you have a similar situation: the isometric embedding problem. So this is a map which isometrically embeds a Riemannian manifold in $\mathbb{R}^n$. If you're not familiar with the problem, think of what you can do with a piece of paper. It carries the flat metric, and I can bend it, but I cannot tear it, and I cannot really stretch it, because it's not elastic. That's an isometric embedding. And, for instance, if you isometrically embed a flat manifold in $\mathbb{R}^3$ with a $C^2$ map, the image has to be a ruled surface: through every point there must pass a straight line. So you see that you cannot do anything better than a cylinder, in a sense, if you want to put your manifold in a very small space.
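Before going on, let me record the Onsager statement from the beginning of this section in formulas. This is just a cleaned-up transcription of what is described above, in the same notation.

```latex
% Incompressible Euler equations and the energy identity:
\[
  \partial_t v + \operatorname{div}(v \otimes v) + \nabla p = 0,
  \qquad \operatorname{div} v = 0,
  \qquad \int |v(x,t)|^2 \, dx \;=\; \int |v(x,0)|^2 \, dx .
\]
% Onsager (1949): with a constant C independent of x, y and t,
%   |v(x,t) - v(y,t)| <= C |x - y|^theta ,
\[
  \theta > \tfrac13 \;\Longrightarrow\; \text{every such weak solution conserves energy},
  \qquad
  \theta < \tfrac13 \;\Longrightarrow\; \text{there are weak solutions which do not}.
\]
```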
But Nash, and in fact, in the form I'm going to state it here, subsequently also Kuiper (or "Kerpa," as my PhD student told me it should be pronounced), proved the following in the 50s: for every $\varepsilon > 0$ and for every immersion $v$ which is short, which means that it satisfies $Dv^T Dv \le g$ in the sense of quadratic forms, there exists, nearby the short $v$, a map $u \colon (M, g) \to \mathbb{R}^n$ which is a $C^1$ isometric embedding and which is uniformly $\varepsilon$-close to your $v$. Obviously this contradicts the rigidity theorem, because I could take, for instance, a piece of paper, shrink it by decreasing lengths into a ball of radius $\varepsilon$, and then, by the theorem of Nash and Kuiper, approximate it again while staying in the $\varepsilon$-neighborhood, and so put my piece of paper isometrically, with a $C^1$ isometry, into the ball of radius $\varepsilon$. That obviously contradicts the rigidity, in a sense. So what happens is the following. If you consider positively curved surfaces, so in this case $n = 3$, the dimension of the surface is $2$, and $g$ has positive Gauss curvature, it was actually proved in the 50s by Borisov that the embeddings are rigid in $C^{1, 2/3 + \varepsilon}$; so you cannot have a theorem like Nash-Kuiper in that class. We have a second proof of this, together with Sergio Conti, also in 2011, which is much shorter than Borisov's proof, and it has some connection with the proof by Constantin, E, and Titi of the yes part of the Onsager conjecture. Similarly, there was a claim by Borisov in 1963, that's what he announced: the Nash-Kuiper phenomenon, the Nash-Kuiper theorem, can be proved for $C^{1,\alpha}$ maps $u$ if the metric $g$ is analytic. The exponent that he announced in the case of two-dimensional surfaces in $\mathbb{R}^3$ (I can give you the general exponent later) is $1/7$. In fact what he proved, a lot later, in 2004, was $1/13$; the $1/7$ really appeared in our paper together with Sergio and László, where we could also remove the analyticity of the metric. I can maybe give you an idea of why he assumes $g$ analytic and what the problem is. More recently, in 2015, together with a PhD student of mine, László and I improved the exponent to $1/5$, and this is up to date the best that you can do for two-dimensional surfaces in $\mathbb{R}^3$. There is a general theorem that you can prove for $n$-dimensional manifolds in $(n+1)$-dimensional Euclidean space, but then the exponents deteriorate. One important caveat: these exponents are valid if your manifold is topologically trivial, if it is a ball, somehow. The ambitious goal of this talk would be to give you an idea of where these exponents come from. By the way, if you're interested in the isometric embedding problem: Gromov conjectures that the threshold, which for Onsager's conjecture is $1/3$, should here be $1/2$. That seems to be also what Borisov believed; Borisov has died, so he cannot answer the question, but this really seems to be what he believed. OK, so I'm going to focus on the exponents, but on the constructive side. And the first thing that I want to show you is how Nash could actually prove that theorem over there.
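In formulas, the statement I have in mind is the following; this is the standard formulation, with $n \ge m+1$ covering Kuiper's improvement of Nash's original $n \ge m+2$.

```latex
% Nash (1954), Kuiper (1955): flexibility of C^1 isometric embeddings.
% Let (M^m, g) be a smooth compact Riemannian manifold and let
% v : M -> R^n, n >= m+1, be a short immersion, i.e.
\[
  Dv^{T} Dv \;\le\; g \qquad \text{in the sense of quadratic forms.}
\]
% Then for every eps > 0 there exists a C^1 isometric immersion
% u : M -> R^n with
\[
  Du^{T} Du \;=\; g , \qquad \| u - v \|_{C^0} \;\le\; \varepsilon ;
\]
% if v is an embedding, u can be taken to be an embedding as well.
```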
And for the sake of this talk, let me assume that instead of going into any Euclidean space, you go into a Euclidean space whose dimension is at least two more, say $n \ge m + 2$. Obviously this does not serve the purpose of our problem over there, which would be two-dimensional surfaces in $\mathbb{R}^3$; that is exactly the improvement of Kuiper, but then the computations are nastier and more difficult to explain. OK, so I want to produce my approximation $u$ as a limit of successive approximations: I want to pass from $u_q$ to $u_{q+1}$ to $u_{q+2}$ and so on. How do I want to do this? First of all, I will stay a short map along the way, so I keep the inequality $Du_q^T Du_q \le g$ all the time; notice that in matrix notation this is just the statement that the symmetric matrix $Du_q^T Du_q$ is less than $g$. And then I want to modify $u_q$ into a new $u_{q+1}$ in such a way that the new metric error $g - Du_{q+1}^T Du_{q+1}$ is substantially smaller than the previous one. How am I going to do this? This is the interesting part. I take $h = g - Du_q^T Du_q$, my metric error, which is a positive definite matrix, and I decompose it as a sum of coefficients, call them $a_i^2$, times rank-one matrices, which are positive semi-definite:
$h(x) = \sum_i a_i^2(x)\, e_i \otimes e_i$.
The fact that I can decompose $h$ this way, with fixed vectors $e_i$ and coefficients $a_i$ which vary in $x$, is a simple exercise in linear algebra (see the sketch below). And now what I want to do is modify my $u_q$, and I have to do it in a finite number of steps: I perturb $u_q$ towards $u_{q+1}$ by adding one perturbation at a time, and each time I add a perturbation, the metric computed for the perturbed map looks like the previous one plus one of these rank-one portions. I do this a finite number of times, and then I have a $u_{q+1}$ whose metric is, up to a negligible error, essentially what I aimed for. Very good. So how am I actually doing this? This is done by what are nowadays called Nash spirals. For instance, for the first perturbation you take $u_q(x)$ and you add the following precise formula:
$\frac{1}{\lambda}\, a_1(x) \cos(\lambda\, x \cdot e_1)\, B(x) \;+\; \frac{1}{\lambda}\, a_1(x) \sin(\lambda\, x \cdot e_1)\, N(x)$.
So what are $B$ and $N$? They are two orthonormal vectors which are also orthogonal to my surface: $B$ is orthogonal to $N$, and both are orthogonal to the tangent space of the image manifold. Now I want to show you why this perturbation works. When I compute the differential of $u_q$ plus my perturbation, I get $Du_q$, plus the terms where the derivative hits the cosine or the sine: there a $\lambda$ comes out which cancels the $1/\lambda$. And then I have some error terms of order $1/\lambda$, because the derivatives hit the other coefficients, $B$ or $a_1$. A typical example of such an error term is the derivative of $B$, divided by $\lambda$, times $a_1$, times the cosine: $\frac{1}{\lambda}\, DB(x)\, a_1(x) \cos(\lambda\, x \cdot e_1)$.
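Before computing the new metric, let me make that linear-algebra exercise concrete with a small numerical sketch, for $2 \times 2$ symmetric matrices: three fixed rank-one directions suffice, and the coefficients stay nonnegative as long as $h$ is close enough to the identity, which is the regime in which the decomposition is actually used. The function name and the particular choice of directions are mine, for illustration only.

```python
import numpy as np

def rank_one_decomposition(h):
    """Write a symmetric 2x2 matrix h as sum_i a_i^2 * v_i v_i^T with FIXED
    unit vectors v_i and variable coefficients a_i^2, as in the iteration.
    Valid whenever the coefficients below come out nonnegative, e.g. for h
    close to the identity."""
    e1 = np.array([1.0, 0.0])
    e2 = np.array([0.0, 1.0])
    # third direction: (e1 +/- e2)/sqrt(2), chosen by the sign of h_12
    e3 = (e1 + e2) / np.sqrt(2) if h[0, 1] >= 0 else (e1 - e2) / np.sqrt(2)
    a3_sq = 2 * abs(h[0, 1])            # matches the off-diagonal entry
    a1_sq = h[0, 0] - abs(h[0, 1])      # remaining diagonal mass
    a2_sq = h[1, 1] - abs(h[0, 1])
    assert min(a1_sq, a2_sq, a3_sq) >= 0, "h too far from the identity"
    # three rank-one terms: this is the n_* = 3 of the exponent count later
    return [(a1_sq, e1), (a2_sq, e2), (a3_sq, e3)]

h = np.array([[1.0, 0.2],
              [0.2, 1.3]])
terms = rank_one_decomposition(h)
print(sum(a * np.outer(v, v) for a, v in terms))   # reproduces h
```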
OK, and now what happens when I compute the new metric? Since $B$ and $N$ are orthogonal to each other and to the columns of $Du_q$, when you compute $(Du_{q+1})^T Du_{q+1}$ you have no mixed products: they all cancel, by the orthogonality of $B$ with $N$ and the orthogonality of both with $Du_q$. What you get is $Du_q^T Du_q$, plus $a_1^2 \cos^2$ and $a_1^2 \sin^2$ times $e_1 \otimes e_1$, and since $\sin^2 + \cos^2 = 1$, voilà: I get my $e_1 \otimes e_1$ times $a_1^2$, up to errors of order $1/\lambda$. So I have added one of the summands that I wanted to add. Now I do this a finite number of times, and I'm happy. This is the basic iteration, and you can put in the epsilons and deltas; once I've told you this trick, in a couple of minutes you can rewrite Nash's paper from 1954. It's all here; there's nothing deeper than this. OK, so how do you get from this construction to a $C^{1,\alpha}$ construction, how do you get an exponent? Let me show you in this computation how we get an exponent; the magic $1/3$ of the Onsager conjecture will appear in a second. What is going to happen is the following. You will prove that the $C^1$ norms of the increments form a summable series, so that $u_q$ converges in $C^1$. But the $C^2$ norms will most likely blow up. And you know they will blow up, because the rigidity theorem tells you that you cannot possibly make this iteration work in $C^2$. The point is that since I have this $1/\lambda$, which I can shoot very high, my perturbation in $C^0$ can stay as close to the initial map as you want. So, first of all, doing as Vlad did yesterday, let me call the $C^1$ size of the increment, which is going to be small, $\delta_{q+1}^{1/2}$. If you look at our computation, this size is essentially the size of $a_1$, and $a_1^2$ is, in that sum upstairs, related to the metric error. So with this ansatz the metric error is essentially of size $\delta_{q+1}$. Now, for the second derivatives, the most important contribution is when the second derivative hits the fast oscillating term again, so I get a $\lambda$: the $C^2$ norm of the increment is of order $\delta_{q+1}^{1/2} \lambda_{q+1}$. Now, I expect the $\delta_q$ to converge to $0$ and the $\lambda_q$ to blow up, and let us assume that the blow-up and the convergence are both exponential: say $\delta_q = \lambda_q^{-2\alpha_0}$ with $\lambda_q = \lambda_0^q$, so that $\delta_q = \lambda_0^{-2\alpha_0 q}$. This is the ansatz. Now, a simple interpolation estimate between the $C^1$ and $C^2$ bounds tells you that in $C^{1,\alpha}$ the increment is at most $\delta_{q+1}^{1/2} \lambda_{q+1}^{\alpha} = \lambda_{q+1}^{\alpha - \alpha_0}$, and this converges, geometrically, exactly when $\alpha$ is less than $\alpha_0$.
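The cancellation described in this computation can be checked symbolically. Here is a minimal sympy sketch in the simplest possible setting, namely the flat base map $u(x) = (x_1, x_2, 0, 0)$ into $\mathbb{R}^4$ with constant normals $B = e_3$, $N = e_4$; in the general curved case $B$ and $N$ vary, and the errors are $O(1/\lambda)$ rather than $O(1/\lambda^2)$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
lam = sp.symbols('lambda', positive=True)
a = sp.Function('a')(x1, x2)          # slow coefficient a_1(x)

# flat base map into R^4, with constant orthonormal normals B = e3, N = e4
u = sp.Matrix([x1, x2, 0, 0])
B = sp.Matrix([0, 0, 1, 0])
N = sp.Matrix([0, 0, 0, 1])

# Nash spiral in the direction e1
u_new = u + (a / lam) * (sp.cos(lam * x1) * B + sp.sin(lam * x1) * N)

J = u_new.jacobian([x1, x2])                 # 4x2 Jacobian Du_{q+1}
g_new = sp.simplify((J.T * J).expand())      # pulled-back metric

sp.pprint(g_new)
# schematically:  [[1 + a^2 + (d1 a)^2/lambda^2 ,  (d1 a)(d2 a)/lambda^2],
#                  [(d1 a)(d2 a)/lambda^2       ,  1 + (d2 a)^2/lambda^2]]
```

Up to the $1/\lambda^2$ terms, the new metric is the old one plus $a^2\, e_1 \otimes e_1$: the fast oscillation is averaged out by $\cos^2 + \sin^2 = 1$, which is exactly the mechanism on the board.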
So, coming back to the iteration: this $\alpha_0$ is exactly the Hölder threshold that you can achieve. Now you have to understand how to make the convergence of the $\delta_q$ as fast as possible, given what you choose for the $\lambda_q$. And where is the catch? The catch is that I have to choose $\lambda_{q+1}$ large to make a certain error small; that's the error that I have over there. And how small do I have to make it? I have to make it small compared to the new $\delta_{q+2}$, because that error is going to tell me how big the next one is. Essentially, what I have in the computation upstairs is that $Du_{q+1}^T Du_{q+1} - g$ is as small as this $O(1/\lambda)$, and I want to quantify that. Now, if I quantify it: you have seen one example of an error; the real story is more complicated, because there are many other errors, but let's see what happens with that one. So I have the $1/\lambda_{q+1}$. Then I have the derivative of a vector that I've chosen normal to my surface. Now, this vector is as regular as my tangent space, if I choose it in a smart way, so its derivative behaves like the second derivative of $u_q$. The second derivative of $u_q$ is blowing up exponentially: it is the sum of the contributions $\delta_j^{1/2} \lambda_j$, and when you sum a geometric series, you see something comparable to the last term. So here I have $\delta_q^{1/2} \lambda_q$. And then there is the $a_1$, and remember, the $a_1$ is as small as the metric error that you had to kill, which is $\delta_{q+1}^{1/2}$. Now, if your iteration is consistent, this error had better be as small as, or smaller than, the next error that you want to achieve. So your condition is the following:
$\delta_q^{1/2}\, \delta_{q+1}^{1/2}\, \lambda_q\, \lambda_{q+1}^{-1} \;\le\; \delta_{q+2}$.
And making it strictly smaller just means picking $\lambda_{q+1}$ even bigger, so essentially you can put equality; that's the best you can do. [Question from the audience: why is it $\delta_{q+1}^{1/2}$ and not $\delta_q^{1/2}$?] Because it's the $a_1$: the size of the previous metric error gives you the size of the next perturbation, so there is a mismatch of one index. It took us a couple of years to understand this properly; we were always using the wrong notation, and I think Tristan was the first to really point out the notation which is consistent with the scheme. Anyway, now you can insert the ansatz and compute $\alpha_0$. Over here I have $\lambda_0^{-\alpha_0 (q+1)}$, then $\lambda_0^{-\alpha_0 q}$, then $\lambda_0^{q}$, then $\lambda_0^{-(q+1)}$, and this has to equal $\lambda_0^{-2\alpha_0 (q+2)}$. Now I take the logarithm, which makes the $\lambda_0$ disappear, and you notice that the terms in $q$ cancel: $-\alpha_0 q - \alpha_0 q$ on the left against $-2\alpha_0 q$ on the right, and $q - q = 0$.
OK, so let me collect what is left. On the left I have $-\alpha_0$ from the first factor, nothing from the next two, and then a $-1$; on the other side I have $-4\alpha_0$. So $-\alpha_0 - 1 = -4\alpha_0$: put the $4\alpha_0$ on one side and the $1$ on the other, and you get $3\alpha_0 = 1$, that is, $\alpha_0 = 1/3$. This basic computation we did in 2010, and from 2010 on we thought that, OK, this might explain at an analytic level why you have Onsager's conjecture. Now, why does the isometric side degenerate? Well, it degenerates because I pretended that I only have to add one perturbation; but actually I have to add many, right? I just made the computation with one error. But then I have to add the next perturbation, and it will have a faster oscillation than the one before; I have to make faster and faster oscillations, while the metric error is not improving. What actually happens, if you do the full computation, which I'm not going to give you, is that the $\alpha_0$ you get is $1/(1 + 2 n_*)$, where $n_*$ is the number of steps you have to do. So why, for instance, do you have $1/7$ over there? Because the space of symmetric $2 \times 2$ matrices is three-dimensional. So if I want to write my symmetric matrix as a sum of rank-one matrices (the rank-one matrices span the space of symmetric matrices, but I need at least three of them), then $n_* = 3$, and $1/(1 + 2 \cdot 3)$ gives me $1/7$. And similar numbers you can crank out for all possible dimensions. So why on earth can I improve to $1/5$? That's because I can use differential geometry: by making a change of variables which is conformal at each step of the iteration, I can diagonalize the metric, and to write a matrix in diagonal form I need only two rank-one matrices. So I only need two steps, and when I plug $n_* = 2$ into this formula, I get the $1/5$. [Question: would taking $n$ larger improve this step, because you would have more space to play with?] No, no: the $n_*$ is what kills me, because I have to add many more oscillations, unless you understand how to plug them all in together, which we have not. [But the $n_*$ is uniformly bounded.] Right, $n_*$ is uniformly bounded. For instance, for $n$-dimensional manifolds which are topologically trivial, we can make this $n_*$ equal to the dimension of the space of symmetric matrices, which is $n(n+1)/2$, and this gives you the $1/7$ for two-dimensional surfaces. [But my question was: instead of $2$, take a higher dimension.] Of course: it's worse, because this is the formula, right? You increase $n_*$; the less you put in there, the better it is, with this iteration. OK, so now it looks kind of funny, because you see that the world record, without going to different spaces, is still this $1/5$ over here. And the funny thing is that this $1/5$ does not have anything to do with the other $1/5$. But since I've used essentially twenty, well, thirty minutes of my talk, I will not be able to tell you about that $1/5$; hopefully I will be able to tell you about these two guys.
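This bookkeeping is mechanical enough to hand to a computer algebra system. A minimal sketch (the variable names are mine): impose the compatibility condition on the exponential ansatz, solve for $\alpha_0$, and then evaluate $1/(1 + 2 n_*)$ for the two values of $n_*$ just discussed.

```python
import sympy as sp

alpha, q = sp.symbols('alpha q', positive=True)

# ansatz, written as exponents of lambda_0:
#   delta_j = lambda_0^(-2*alpha*j),   lambda_j = lambda_0^j
delta_exp = lambda j: -2 * alpha * j
lam_exp = lambda j: j

# compatibility: delta_q^(1/2) delta_{q+1}^(1/2) lambda_q / lambda_{q+1} = delta_{q+2}
lhs = sp.Rational(1, 2) * delta_exp(q) + sp.Rational(1, 2) * delta_exp(q + 1) \
      + lam_exp(q) - lam_exp(q + 1)
rhs = delta_exp(q + 2)

print(sp.solve(sp.Eq(lhs, rhs), alpha))      # -> [1/3]; the q-terms cancel

# with n_* rank-one steps per stage, the same bookkeeping gives 1/(1 + 2 n_*)
for n_star in (3, 2):
    print(n_star, sp.Rational(1, 1 + 2 * n_star))   # -> 3 1/7  and  2 1/5
```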
OK, so let me give you a fake proof of the Onsager conjecture; let me try to set up the same iteration mechanism. [Question: but over there you said that the conjectured threshold is $1/2$?] Yeah, so somehow there is a tension: this mechanism only goes to $1/3$. So either you believe Gromov, and then the mechanism is not really reaching the threshold, or you believe that there is a reason why $1/3$ should be critical even in this case. Here we believe Gromov, right? At the IAS, I guess. [There are some explanations for the $1/2$.] Yes, from the rigidity part, though, and it would take me extra time to explain why $1/2$ is interesting. The $2/3$ over there is not accidental either: $2/3$, $1/2$, and $1/3$ all have their own internal reasons. OK, so how am I going to construct solutions of the Euler equations? You heard Vlad: it's an iteration, and the iteration is going to look like this; that was already shown by Vlad. So now $\mathring{R}_q$ is a $3 \times 3$ symmetric matrix. How on earth is this connected to the isometric embedding problem; how do you come up with an idea like this? You can think about it in the following way. What is a short map? A short map is something where you have the inequality instead of the equality. If I give you a sequence of isometric embeddings without any estimates on them, what is going to happen? You get a sequence of maps which preserve the lengths of curves; but if you have a sequence of curves with the same length converging to another curve, the limit curve has a length which is only less or equal. So you can interpret the short-map inequality $Du^T Du \le g$ as a relaxation of the isometric embedding problem. How can you interpret the Euler side in the same way? Put the error equal to $0$, and assume I give you a sequence of solutions of the Euler equations which converges weakly, but not strongly. If it converges only weakly, if you only have a bound in $L^2$ and you don't know that you converge strongly, you can take the weak limit of $v_q \otimes v_q$, and you will see that the tensor product of the limits will most likely drop below it: the difference is positive semi-definite. It's a convexity inequality, just as the short-map condition over there is a convexity inequality. So this is, in some sense, a relaxation of the problem you started with. And now I want to play a similar game: I want to start from $(v_q, p_q, \mathring{R}_q)$ and generate $(v_{q+1}, p_{q+1}, \mathring{R}_{q+1})$. How am I going to do that? First of all, I notice the following. This is a problem which I can always solve: you give me a vector field, and I want a symmetric matrix whose divergence is equal to that vector field. It's an elliptic problem, and I can always solve it, just as I can always solve "the divergence of a vector field equals a given function"; I solve a Laplace equation, for instance. I can solve it in more than one way, so let me denote by $\operatorname{div}^{-1}$ some operator which inverts this divergence. If you look at this operator in Fourier space, it typically has order $-1$: if I apply $\operatorname{div}^{-1}$ to something like $e^{i \lambda k \cdot x}$, then...
...then I get something like $\frac{k}{|k|^2}\, e^{i \lambda k \cdot x}$, divided by $\lambda$. So there is a $1/\lambda$ coming out: this is a Fourier operator of order $-1$. OK, so I can always invert this divergence. Now I want to produce the new map. This is going to be $v_{q+1} = v_q + w$, where $w$ is the perturbation that I'm adding, and I declare the next error to be what I get by applying the operator $\operatorname{div}^{-1}$ to the new equation. The whole point is that I want to choose $w$, and the perturbation of the pressure, in such a way that the new error is much smaller than before. As before (you see the analogy), the $C^0$ norm of $w$, which is $v_{q+1} - v_q$, is going to be estimated by $\delta_{q+1}^{1/2}$, and since I will add a fast oscillating term, the $C^1$ norm will have the estimate $\delta_{q+1}^{1/2} \lambda_{q+1}$. So how am I going to construct this $w$? I make an ansatz: $w$ is going to look like a function $W(v_q, \mathring{R}_q;\, \lambda_{q+1} x,\, \lambda_{q+1} t)$. So $v_q$ and $\mathring{R}_q$ are slow variables, and then I have fast oscillations. For instance, in the ansatz upstairs, the $a_1$, which depends on the metric error, would be the dependence on the slow variables, and the $\cos(\lambda\, x \cdot e_1)$ would be the fast variables. And something similar I do for the pressure: $p_{q+1} - p_q$ is going to be some function $P(v_q, \mathring{R}_q;\, \lambda_{q+1} x,\, \lambda_{q+1} t)$. OK, so now: what is the set of conditions which, in this context, would make the Nash scheme work? I'm just going to write them down; it took quite some time to understand why these are the right conditions, because it took us quite some time to set up something which resembles Nash. But then I will show you how this set of conditions is very natural if you want to set up the iteration this way. So my function $W$ is now a function of $v$, $R$, $\psi$, and $\tau$. The first thing I want to solve is a PDE in the fast variables; this is the PDE part. Then there is a second part: since I want to add oscillations, the function should be periodic in $\psi$, and, using brackets for the average in $\psi$, the average $\langle W \rangle$ should be $0$, and the average $\langle W \otimes W \rangle$, minus its trace part, should be the traceless part of $R$. I didn't tell you, but the tensor over there I actually take traceless. So why are these the conditions which ensure that I can run the iteration? Let us look at a first point. My $w$ is not necessarily divergence free, and I want to add a perturbation which is divergence free: the real perturbation that I'm adding is this $w$ plus a corrector $w_c$ which restores the divergence-free condition. And now what I claim is that if I take the condition $\langle W \rangle = 0$ and the condition $\operatorname{div}_\psi W = 0$ together, then the divergence of $w$ is going to be very small, so the corrector $w_c$ is small. And why is that? Because I can expand $W$ in a Fourier series in $\psi$, with coefficients $c_k$ which depend on $v$ and $R$...
...times $e^{i \lambda_{q+1} k \cdot x}$ once I substitute $\psi = \lambda_{q+1} x$. So the condition that the average of $W$ is $0$ tells me that $c_0 = 0$, and the condition that the $\psi$-divergence of $W$ is $0$ tells me that when I compute the divergence of $w$, I'm not hitting the fast oscillating exponentials; I'm hitting only the slow coefficients. So if I now want to invert the divergence and find the corrector with this divergence, since everything is fast oscillating with zero mean, I gain a $1/\lambda_{q+1}$ when I invert the operator, and that gives me a correction $w_c$ which is very small (see the numerical sketch below for this gain). OK, so that's the easy part; this is the thing which is easy to figure out. So let us look at what happens in the most complicated part, which is the part upstairs; from now on, I will forget that the corrector exists. You see, I know that $\partial_t v_q + \operatorname{div}(v_q \otimes v_q) + \nabla p_q = \operatorname{div} \mathring{R}_q$, so I can subtract the equation for $v_q$ from the equation for $v_{q+1}$, and what I get is the following: $\partial_t w + v_q \cdot \nabla w$, plus $\operatorname{div}(w \otimes w)$, plus $\nabla (p_{q+1} - p_q)$; you see that here I'm expanding the quadratic product, and I've taken $v_q \cdot \nabla w$, but there is also $w \cdot \nabla v_q$, which is still missing; and then minus what I'm subtracting, which is the term upstairs, the divergence of $\mathring{R}_q$. OK, now what happens? Let me take the term $w \cdot \nabla v_q$ out: this is going to be called the Nash error, and I'm going to discuss it in a moment. And let me group all the other terms together. Now, when I plug in my ansatz, namely that the perturbation has that form, applying this operator produces derivatives in the fast variables and derivatives in the slow variables, and the condition that I have over here tells me exactly that applied to the fast variables the operator gives $0$. So what remains are only slow derivatives of my functions $W$ and $P$. Slow derivative means that the derivative hits not the $\lambda_{q+1} x$, $\lambda_{q+1} t$, but the coefficients of $W$. And if I'm hitting only slow derivatives, what I essentially can do is take all the expressions that appear, expand them in a Fourier series in $\psi$, and then plug in $\psi = \lambda_{q+1} x$; I'm doing exactly the same trick as before, and the reason why I was gaining the $\lambda_{q+1}$ was that the zeroth coefficient of the Fourier series was $0$. So let me write the computation with this slow notation, meaning that derivatives fall only on the entries $v_q$ and $\mathring{R}_q$ of $W$: I have $\partial_t^{\rm slow} W + v_q \cdot \nabla^{\rm slow} W$, and then $\operatorname{div}^{\rm slow} (W \otimes W - \mathring{R}_q)$. What does this big writing mean? It means that if I expand everything (for instance here, $W$ is a Fourier series, so the $e^{i \lambda_{q+1} k \cdot x}$ is not touched, and the time derivatives fall on my coefficients $c_k$), then, since $c_0 = 0$ by the condition that the average of $W$ in the $\psi$ variable vanishes, I'm in good business over here.
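Here is the $1/\lambda$ gain from inverting a derivative on fast-oscillating, mean-zero data, as a numerical sketch. It is a one-dimensional toy model of $\operatorname{div}^{-1}$ (inverting $d/dx$ on periodic data via the FFT); all the names in it are mine.

```python
import numpy as np

# periodic grid on [0, 2*pi)
n = 512
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)           # integer frequencies

def d_dx_inverse(f):
    """Invert d/dx on a mean-zero periodic function via Fourier multipliers."""
    fhat = np.fft.fft(f)
    ksafe = np.where(k == 0, 1.0, k)        # avoid dividing by zero at k = 0
    uhat = fhat / (1j * ksafe)
    uhat[0] = 0.0                           # fix the mean of the primitive
    return np.real(np.fft.ifft(uhat))

for lam in (8, 32, 128):
    f = np.cos(lam * x)                     # fast-oscillating, mean-zero datum
    u = d_dx_inverse(f)                     # exact answer: sin(lam * x) / lam
    print(lam, np.abs(u).max())             # maxima decay like 1/lam
```

A Fourier multiplier of order $-1$ costs a factor $\lambda^{-1}$ on a zero-mean datum at frequency $\lambda$; this is exactly the gain used both for the corrector $w_c$ and, below, for the new error $\mathring{R}_{q+1}$.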
In the linear terms I'm also in good business, because $W$ enters linearly: again, if the average of $W$ in the $\psi$ variable is $0$, then I'm fine. But in the quadratic term I'm not in good business, and the reason is that I have a resonance. Since I have a quadratic term, I have to expand $W \otimes W$ in a Fourier series in $\psi$ and look at the zero mode, where there is no fast variable at all: the condition is that the average of $W \otimes W$ has to cancel the $\mathring{R}_q$, and that is the condition that we had over here. Actually, it's a little bit fake, because here I have a trace-free matrix and $\langle W \otimes W \rangle$ is not trace-free. But since $\mathring{R}_q$ sits under a divergence, what I can do is add a constant matrix, independent of $x$ but depending on time. So in fact the condition is not really that one; it is
$\langle W \otimes W \rangle = \mathring{R} + e(t)\, \mathrm{Id}$
for some scalar function $e(t)$, and this $e(t)$ is, if you want, the increment of the energy. OK, so assuming I can do this, I can invert my operator $\operatorname{div}^{-1}$ and gain a $\lambda_{q+1}$. If you were able to do all that, you would still be left with the error over here, the one which looks like the error in the Nash iteration. So let me handle that term. First of all, you see from the ansatz that since I'm asking that the average of $W \otimes W$ equals this $\mathring{R}$, and the $C^0$ norm of $w = v_{q+1} - v_q$ is of order $\delta_{q+1}^{1/2}$, to be compatible with this restriction we find exactly what we had in the Nash scheme: the error $\mathring{R}_q$ has to be of size $\delta_{q+1}$. Very good. Now, with this ansatz, what happens to my Nash error, which is computed on $w \cdot \nabla v_q$? I have to invert the divergence, and $\operatorname{div}^{-1}$ gives me the $1/\lambda_{q+1}$. Then I have to compute the $C^0$ norm: from $w$ I get the $\delta_{q+1}^{1/2}$, and then I have the gradient of $v_q$; and if $v_q$ is converging with speed $\delta_q^{1/2}$ at oscillation $\lambda_q$, the gradient is of order $\delta_q^{1/2} \lambda_q$. And if you remember what we had in Nash, this is exactly the same error. So if this worked, it would give you the Onsager conjecture, this $1/3$. So what is the problem? Why can't we actually prove the Onsager conjecture by this method? The reason is essentially here. It is possible (and this was in some sense hidden in Vlad's talk) to make this resonant term small, but not to make it exactly $0$. If you go looking for a function $W$ which solves these conditions over here exactly, we are not able to say that it exists. We don't know; maybe it exists, but it seems very unlikely. So what you can actually do is make it small by adding an extra parameter, and then, with this extra parameter, you can try to optimize against the other parameter, $\lambda_{q+1}$.
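Before looking at what goes wrong quantitatively, let me collect the conditions on the profile in one place. This is my reconstruction of what is on the board, with $\psi, \tau$ the fast variables and $\langle \cdot \rangle$ the average in $\psi$; take it as a summary rather than a verbatim formula.

```latex
% Conditions on the profiles W(v, R; psi, tau) and P(v, R; psi, tau):
\[
  \partial_\tau W + v \cdot \nabla_\psi W
  + \operatorname{div}_\psi (W \otimes W) + \nabla_\psi P = 0,
  \qquad
  \operatorname{div}_\psi W = 0,
\]
\[
  \langle W \rangle = 0,
  \qquad
  \langle W \otimes W \rangle = \mathring{R} + e(t)\,\mathrm{Id}.
\]
% If such a W existed, the Nash bookkeeping would carry over verbatim:
\[
  \|\mathring{R}_{q+1}\|_{C^0}
  \;\lesssim\;
  \lambda_{q+1}^{-1}\, \delta_{q+1}^{1/2}\, \delta_q^{1/2}\, \lambda_q
  \;\le\; \delta_{q+2}
  \quad\Longrightarrow\quad
  \alpha_0 = \tfrac13 .
\]
```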
So, since you can only make the resonance small rather than zero, you will have an error term which unfortunately takes you away from this $1/3$ regime. If you are crude, it takes you away to this $1/10$; if you are less crude, it takes you away by this $1/5$, somehow. But OK, I guess I was just too ambitious if I wanted to give you some ideas about this $1/5$ or this $1/10$; I would have needed much more time. Thank you very much. [Question: is it likely that there is a solution to that system?] Formally, it's unlikely that you have a solution, because in some sense this term appears quadratically and that one does not. You could think: OK, what happens if $\mathring{R}_q$ is very small, but $v$ is not? Then you are converging to a regime in which the transport term seems to win, because it's linear, and it's way bigger than the other one. So you can try to make the resonance small, because after all $\langle W \otimes W \rangle$ does not have to match exactly, and this is what happens in our iterations. But to solve it exactly seems impossible. [Are you looking for a periodic solution?] Even that is not strictly necessary; a quasi-periodic solution would still be OK. The point is that I want to stick in a $\lambda_{q+1}$ and stay bounded, and not only stay bounded, but stay bounded with all derivatives. So a solution which, instead of being periodic, is uniformly bounded with all derivatives uniformly bounded would be good enough as well. And a lot of the time, when we have these epsilons, we end up computing a large number of derivatives of this $W$, which grows as we get closer to the threshold. So yes, a quasi-periodic solution would still be perfectly decent. [I'm trying to match this with the phrase "h-principle", but I don't think it fits.] OK, so here is the h-principle. This theorem tells you that you can approximate any solution of the relaxation of the problem with an actual solution. That is one form of the h-principle; the h-principle of Gromov also has a path connecting the solution to the subsolution. [I don't know what the h-principle is at all.] Right, so the h-principle is something like this (this is the point of view of an analyst). You have a system of PDEs, and for this system the following principle holds: if you have a solution of the relaxed problem, you can approximate it, up to any $\varepsilon$, with an actual solution. That is a form of the h-principle, if you want. It tells you that although you have a system of PDEs, which should give you some rigidity, it actually behaves more like an inequality than a true PDE: if you have a solution of the PDE inequality, nearby there is a true solution of the PDE constraint, of the PDE equality. Now, OK, none of these papers really proves an h-principle, but there is a recent paper by László and Sara Daneri which proves an h-principle for this system too. What would an h-principle be in this context? I told you how I do the iteration, but I didn't tell you from which point I start. Here, essentially, the Nash-Kuiper theorem is telling you: start from any point in the relaxation, and you can run the iteration.
In all these iterations we are starting from a trivial point: we start from $(0, 0, 0)$, and then we run the iteration, and $(0, 0, 0)$ is in the relaxation, somehow. The paper by Sara and László has a statement which characterizes the points from which you can start, and it tells you, essentially, that they are all the points which satisfy a certain strict inequality. So that's the h-principle. [Further questions?] [To what kind of evolution equations can this be applied?] The first time we proved this, I would have said: well, only this one. But that's not true: Phil Isett and Vlad have actually applied it, for instance, to all active scalar equations which satisfy a certain condition on the singular integral kernel. And the condition is the one which tells you that there is essentially no compactness hidden inside. You see, this relaxation business is really working because you have some sort of lack of compactness. A posteriori, the theorem, the h-principle, is telling you that there is no compactness for your sequence of solutions; otherwise, you wouldn't be able to approximate something in the relaxation so well. So a priori you could dream: OK, if I have a lack of compactness, then I can apply it. But of course the situation is very subtle, because, you see, I actually do have compactness in the rigidity regime. If I have a sequence of smooth solutions of the isometric embedding problem for a positively curved surface, so the Gauss curvature is positive, then the surfaces are convex, and you have an extra estimate; so your space of solutions is actually compact. The lack of compactness appears only when you go below a certain threshold, and not above. And in these situations it's all very interesting: where does this threshold lie? We don't have a good guess in general. For Onsager's conjecture we have Onsager's conjecture: we have turbulence, which gives you a guess. Here, for the embeddings, it's much less clear, because there is no clear intuitive picture.
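To make "point in the relaxation" concrete, here are the two relaxations side by side, in the standard formulation used in this literature; this is my summary, not a formula from the board.

```latex
% Relaxations, side by side.
% Isometric embedding: short maps,
\[
  Du^{T} Du \;\le\; g .
\]
% Euler: subsolutions (v, p, R) with
\[
  \partial_t v + \operatorname{div}(v \otimes v + R) + \nabla p = 0,
  \qquad \operatorname{div} v = 0,
  \qquad R \ge 0 .
\]
% Weak limits land in the relaxation: if v_q -> v weakly in L^2, then
\[
  R \;=\; \operatorname{w\text{-}lim}\,(v_q \otimes v_q) \;-\; v \otimes v \;\ge\; 0 ,
\]
% and the h-principle statement runs the iteration from any strict
% subsolution (R > 0), recovering the characterization mentioned above.
```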