Thank you all for coming back. The fact that you're here means the first lecture was not so bad, so I'm happy to see you back. Alright, so where I stopped last time, I mentioned that Camillo and László constructed bounded weak solutions of Euler which can, in fact, attain an energy profile. So they constructed weak solutions of 3D Euler which are bounded and wild, and I will mention what this word "wild" refers to. So I want to sketch the proof that they did. Not really the proof, but more like the fundamental ideas behind it, and then, as we discussed last time, I'll do a toy example instead of the actual proof. So the story starts with the concept of Reynolds stress, and the idea is very simple. You want to simulate Euler on your computer. So what you do is maybe a Galerkin truncation, because you can only store finitely many modes, or some kind of averaging operator. So you go from V to V bar, which is an averaging, a Fourier projection, some kind of procedure. And then you ask yourself: once you've done this, what equation does V bar obey? Okay, your averaging will commute with time derivatives, and if it's a nice averaging, like a mollifier or equivalently a Fourier projection, it will also commute with space derivatives, but it won't commute with nonlinearities. So the equation you will get is this. I'm going to put it here, although I'm not supposed to: dt V bar plus the divergence of V bar tensor V bar, plus the gradient of the averaged pressure, equals minus the divergence of a stress. And this stress is exactly the commutator between the averaged product and the product of the averages: R bar equals (V tensor V) averaged minus V bar tensor V bar. And of course, because your averaging commutes with derivatives, you still maintain incompressibility. By the way, this is the same as (V minus V bar) tensor (V minus V bar), averaged. So this Reynolds stress, in the sense of quadratic forms or matrices, is a positive definite thing, or at least it's non-negative. The other important thing to note is that you cannot compute R bar from V bar and P bar.
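To make the commutator concrete, here is a tiny numerical sketch, using the simplest possible averaging operator, the spatial mean (the Fourier projection onto the zero mode), for a scalar field; the grid and the field are arbitrary illustrative choices, not from the lecture.

```python
import numpy as np

# Reynolds-stress identity for the simplest averaging: the spatial mean.
# For a scalar v, R = avg(v*v) - avg(v)**2, and the rewriting
# R = avg((v - avg(v))**2) makes the non-negativity manifest.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
v = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

avg = lambda f: f.mean()            # the averaging operator
R = avg(v * v) - avg(v) ** 2        # commutator of averaging and product
R_alt = avg((v - avg(v)) ** 2)      # same quantity, written as a square

assert R >= 0.0
assert abs(R - R_alt) < 1e-12
```

Note also that R cannot be recovered from avg(v) alone: two fields with the same mean can have different Reynolds stresses, which is exactly the closure problem mentioned above.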
If you think of a Fourier projection, you don't have access to the modes that you've filtered out. So in practice, people model this — this is eddy viscosity — and it is actually used in simulations of slightly viscous flows. And this R, this positive semi-definite symmetric matrix, is what is called the Reynolds stress. So what Camillo and László realized was that instead of working with this formulation — one equation, second equation, and the constraint that R bar is positive, where the first equation is nonlinear — it's better to turn this into a system of equations which are completely linear and which have a nonlinear constraint. And again, the idea comes from the old convex integration techniques used in differential geometry and for differential inclusions. So what they did instead is they said: these three things are actually equivalent to saying that I'm going to solve dt V bar plus the divergence of some matrix, let's call it U bar — it's a very poor choice of letters — plus the gradient of some other pressure, Q bar, equal to zero. This is a linear equation for an incompressible vector field; U bar is some symmetric matrix. So it's a linear equation, but in three unknowns. Okay, if you apply the divergence to this equation, you actually determine that Q bar is given by Riesz transforms applied to U bar. But you need something to recover your old condition that the Reynolds stress was positive. And that's a constraint between U bar and V bar tensor V bar. It turns out that the constraint is that this matrix, V bar tensor V bar minus U bar, is less than a multiple of the identity: the identity times some function of time, and this is going to be called the kinetic energy. So let me just write down the formulas for these things. How do you go from here to here? What would be the correct kinetic energy? You would just take the trace, right? So E bar should be nothing but one half the trace of V bar tensor V bar plus R bar.
So I'm just giving you the mapping of how to go from here to here. So that's who E bar is. Then U bar is this symmetric matrix, and it is of course designed exactly so that the constraint comes out true. And the new pressure is the old pressure plus two thirds of the kinetic energy. Okay, this 2 is related to the one half; this 3 is related to the fact that the trace of the identity matrix is three. They don't have special meaning, and this should be adapted in general dimension. And now you see that when you check the condition — you compute V bar tensor V bar minus U bar — you get two thirds E bar times the identity minus R bar on the other side, and the fact that R bar is positive is exactly that condition. So this is called the relaxed Euler equation, because we have somehow forgotten that this is a nonlinear PDE; instead we have a nonlinear inequality. And such a triple is called a subsolution. So let me define the notion of subsolution. Given an energy density E, which should be positive, we call (V bar, U bar, Q bar) a subsolution of 3D Euler if this system holds. In what sense does it hold? This is a PDE, it has many derivatives, but it's written in divergence form — every single term is a derivative, whether in space or time — so it should hold in the sense of distributions. The first two equations hold in the sense of distributions, and the constraint should hold almost everywhere. Okay? And of course you want some regularity. The minimal regularity needed would be that V bar is locally L2, U bar should be at least locally L1 so that you can give meaning to the almost-everywhere inequality, and the pressure should be at least a distribution so that you can test against it. So that's a subsolution. You call the subsolution strict if the inequality holds strictly; here we would add the adjective "strict" if the strict inequality holds true. Now what's the good thing about subsolutions? You can easily find subsolutions.
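Collecting the formulas just described from the board, the relaxed system and the dictionary from an Euler–Reynolds state to a subsolution can be written as follows (this is only a transcription into symbols of what was said above):

```latex
% Relaxed Euler system (for a subsolution):
\partial_t \bar v + \operatorname{div} \bar u + \nabla \bar q = 0, \qquad
\operatorname{div} \bar v = 0, \qquad
\bar v \otimes \bar v - \bar u \le \tfrac{2}{3}\,\bar e \,\mathrm{Id}
\quad \text{a.e.}
%
% Dictionary from (\bar v, \bar p, \bar R):
\bar e = \tfrac12 \operatorname{tr}\!\big(\bar v \otimes \bar v + \bar R\big),
\qquad
\bar u = \bar v \otimes \bar v + \bar R - \tfrac{2}{3}\,\bar e \,\mathrm{Id},
\qquad
\bar q = \bar p + \tfrac{2}{3}\,\bar e .
%
% Sanity checks: \operatorname{tr}\bar u = 2\bar e - \tfrac{2}{3}\bar e \cdot 3 = 0,
% so \bar u is traceless; and
% \bar v \otimes \bar v - \bar u = \tfrac{2}{3}\bar e\,\mathrm{Id} - \bar R,
% so the constraint is exactly \bar R \ge 0.
```

The factor 2/3 is the three-dimensional normalization; in dimension d it would be 2/d, matching the remark that the 3 comes from the trace of the identity.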
There are so many of them that it's hard to even describe them. You give me almost any E bar, and I will be able to give you a V bar and a U bar such that I have an Euler subsolution — and I'll show you an example later on. So there are many of them. Second, when that constraint becomes an equality, all of a sudden I have a weak solution of 3D Euler. Why? Because if that's an equality, then U bar is just V bar tensor V bar — which is what Euler should have — minus a function times the identity matrix, and a function times the identity is absorbed into a new pressure term. So this actually becomes a true weak solution of Euler. And lastly, you get for free, for any subsolution, that the kinetic energy density of your subsolution is less than E. And how do you see this? Take the trace. So what's the theorem that De Lellis and Székelyhidi actually proved? Their theorem from 2009 — I'll write also 2010, because besides their Annals paper they have another paper, I think in ARMA, which does something a bit more. So let me write down what their theorem actually is. It says that given any continuous... "Sorry to interrupt, but I don't understand how you get this inequality by taking the trace. What do you do with U bar?" It's traceless. Oh, I did not say this. "You did not say it." U bar is traceless, okay. I wrote it in my notes, but I didn't say it — it's part of the definition of subsolution. I'm sorry. That's why it follows. Okay, so returning to the result: given any continuous energy density, which is positive, and given any smooth strict subsolution (V bar, U bar, Q bar) — so you give me some kind of smooth subsolution; I think in their original theorem it's stated with C infinity smooth, but it's not actually required, I think you can do C1 — given any such subsolution, I can find a true solution. I can complete it somehow to a true solution.
There exist infinitely many weak solutions V of 3D Euler such that the velocity is continuous in time with values in L2, but more importantly, it's bounded in space and time. The pressure is given by Q bar minus two thirds E, and V has the prescribed energy density, one half |V| squared equals E, almost everywhere. And in fact, they can also prove that all of these infinitely many solutions are within the epsilon ball of the subsolution in the L2 topology. Okay, so I could have added another parameter, "given any epsilon". I wanted to stress this part instead, because this really says — E is a continuous function of space and time, right? — that if you give me a function of x and t which is positive, I will find you a weak solution which has not just the prescribed total kinetic energy, meaning integrated over space, but the prescribed energy density at each (x, t). And this is very brutal, a very brutal kind of non-uniqueness. And it's not just non-uniqueness: if you make the spatial average of E non-increasing in time, then you have dissipative solutions, which is what you wanted. If E has compact support, so does V. So out of this theorem you get for free all those many kinds of non-uniqueness that we discussed last time. And the thing which is not written on the board — the fact that in the L2 metric you can find all of these guys in the epsilon ball of your subsolution — also proves h-principles. It is a sort of density statement: it basically shows that the set of weak solutions not only is non-empty, it is actually dense in the set of subsolutions. And since there are so many subsolutions, this is a very brutal type of non-uniqueness, and this is Gromov's h-principle. Okay, and so what's the strategy of the proof?
The strategy is convex integration, somehow of the kind that was used in the theory of differential inclusions. In particular, they adapt a technology introduced by Müller and Šverák, who constructed critical points of strongly elliptic variational functionals which are Lipschitz but nowhere C1 — a very rough kind of critical point (not a minimizer, a critical point) for an elliptic variational problem — which in turn goes back to works of Tartar, works of DiPerna, et cetera. So instead of giving you the proof of this statement, let me just say how you go from a subsolution to a solution for the toy problem that we discussed last time. So the toy problem is: we look at the set X0 of all functions u from [0,1] into R such that |u(x)| is strictly less than 1 almost everywhere. This X0 will be our set of subsolutions, because I put "strictly". And I want to consider its closure X in L2, and I want some kind of nonlinear functional to tell me when I have gone from a subsolution to a solution. So it's convenient to introduce a functional I(u), defined on the closure of X0 in L2: it's just the integral from 0 to 1 of 1 minus u squared. Again, this toy example is actually done in Camillo and László's paper, and I think it's a really beautiful example that they wrote. So what's the point of this I? The objects I want — bounded functions solving |u| = 1 almost everywhere — are nothing but the zeros of I. So the theorem is that the set of u in X such that I(u) is 0 is dense in X0. Well, it's not just dense — I'm not a big fan of Baire category, but it's of that kind. And of course, these are the functions we were looking for. So you see, this is the same type of strategy. You have a very well-defined notion of subsolution — let's say I start with one half sine x — and from one half sine x I can somehow build, very close by, a true solution. In what sense close? In the L2 topology.
"Around every subsolution, you'll find a true solution?" Yes. "But when you say that something is dense in something else, usually the first thing is included in the other one." Okay, you're right — an object which is dense in something else usually has to be a subset, so I should write this differently. What I really meant to say is that for any element of X0 and for any epsilon, I'll find an element of the zero set of I whose distance to it in L2 is less than epsilon. So you're absolutely correct; that's the proper way of writing it. And the proof — so what is this convex integration? It's really simple. It's an iteration scheme. You start with some u0 — again, one half sine, whatever. And then you inductively define a sequence of iterates: u_{k+1}(x) is the old function plus — let me first write it informally — an amplitude, which is slow, times a fast plane wave. So what do I mean? Let's be a bit more precise. In this case you can take the amplitude to be one half times (1 minus u_k(x) squared). I call it slow because you see u_k there, the old iterate. Times what? A fast plane wave. Normally I would use complex exponentials, but let's just write sine of some frequency times x, and to emphasize that it's fast, let me call the frequency lambda_{k+1}. So u_{k+1}(x) = u_k(x) + (1/2)(1 − u_k(x)²) sin(lambda_{k+1} x). Objects which carry the index k+1 are fast, and objects which carry the index k are slow. And I didn't tell you who lambda_{k+1} is: the lambda_k are a sequence diverging to infinity, chosen as part of the construction. I can tell you it's going to grow very fast — exponentials will do. Okay. And that's it; this is the toy example. Let's prove at least aspects of that theorem. Okay. The function a plus a half times (1 minus a squared) — or, okay, let's take absolute values; I really just mean to take absolute values.
So then I have a plus a half times (1 minus a squared) times the absolute value of sine, and the absolute value of sine is at most one, so this is less than one for a in [0, 1). So that means this iteration will keep you in X0. Next, once you know that the iteration keeps you in X0, let's try to prove that somehow a contraction is happening. And to prove that there's a contraction, this is why we have this beautiful positive — it's not linear — functional. By the way, is it clear that I is continuous on L2? It's clear, right? That's going to be actually important. So let's estimate I(u_{k+1}). Well, I just plug in the formula, and — so that I don't embarrass myself in public more than I've already done — let me write what you get: I(u_{k+1}) = I(u_k) − (1/8) ∫₀¹ (1 − u_k²)² dx + (1/8) ∫₀¹ (1 − u_k²)² cos(2 lambda_{k+1} x) dx − ∫₀¹ u_k (1 − u_k²) sin(lambda_{k+1} x) dx. "Excuse me — by brackets, do you mean absolute values in I? In the theorem, when you say that I(u) is zero, couldn't u be anything?" But you're on X, the closure of X0, so |u| is at most one almost everywhere, and in particular, for all the functions that we're applying it to, 1 minus u squared is non-negative and I is fine as written. All right. So this is I(u_k), then you have minus one eighth of an integral, and then two further terms which are of a very different nature. Why of a very different nature? Because this factor only has u_k in it, so it's slow; same for this one; but each is multiplied by something fast. Now sin(lambda x), as you send lambda to infinity, converges weakly to zero. So if you keep the slow factor fixed, weak times strong — the integral will go to zero as you send lambda to infinity. So in particular, if you give me an epsilon, I will find a sufficiently large lambda_{k+1} such that this term is less than epsilon; same for the other one. So given u_k and epsilon, I find a sufficiently large lambda_{k+1}. There's a question, I think. No, no.
"So is there a square in the definition of I(u)? Or why is there a square?" Where? There. There is a square there, in (1 − u_k²)², and then you have another square in the other error term. So you see, the point is that this oscillatory term will go to zero, and that one too, if I choose lambda_{k+1} suitably, and this is why I said the lambda_k are chosen as part of the proof. So in particular, you can do Cauchy–Schwarz backwards: the integral of (1 − u_k²)² dominates the square of the integral of (1 − u_k²), which is I(u_k)². So let's write: if lambda_{k+1} is chosen suitably, then I(u_{k+1}) ≤ I(u_k) − (1/8) I(u_k)² + epsilon_{k+1} — and really I could make the error any order I want. So now we have a nice iteration. The sequence I(u_k) is non-negative and decreasing — decreasing and bounded from below by zero — so it's going to converge somewhere. And you check that it can only converge to zero, because zero is the only fixed point of the map x goes to x minus x squared over 8. So therefore we have proven that I(u_k) goes to zero. So now what's left to do is to prove that the u_k's themselves converge somewhere. I'm not going to go through the details, but it's not hard to check that the sequence defined there is Cauchy in L2. It's really the same argument — this is an exercise we can give to a first-year student: prove that that sequence is Cauchy in L2. So it's going to have a limit, and because I is continuous: if u_k converges to u in L2, then I(u_k) must converge to I(u). But I(u_k), we just proved, converges to zero. So I(u) is zero. And that's that. "In fact, you can certainly prove that it has a limit precisely because I(u_k) goes to zero." Well, that gives a weak limit. "But then you upgrade it." Exactly, exactly — weak convergence upgraded to strong. So that's really a toy convex integration technique. And really what I want to emphasize is two things.
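To make this concrete, here is a quick numerical sketch of the toy scheme. The starting point u0 = (1/2) sin(2πx), the grid size, and the frequency choice lambda_{k+1} = 2π·4^(k+1) are my own illustrative choices (any sufficiently fast-growing integer frequencies would do); the integrals are computed as grid averages, which is exact enough for periodic integrands.

```python
import numpy as np

def I(u):
    # I(u) = integral over [0,1] of (1 - u^2); on a uniform periodic grid
    # the integral is just the mean of the integrand.
    return np.mean(1.0 - u**2)

x = np.linspace(0.0, 1.0, 1 << 18, endpoint=False)
u = 0.5 * np.sin(2 * np.pi * x)            # a subsolution: |u| < 1
vals = [I(u)]                              # I(u0) = 1 - 1/8 = 0.875
for k in range(5):
    lam = 2 * np.pi * 4 ** (k + 1)         # fast frequency lambda_{k+1}
    u = u + 0.5 * (1.0 - u**2) * np.sin(lam * x)   # slow amplitude x fast wave
    assert np.max(np.abs(u)) < 1.0         # the iteration stays in X0
    vals.append(I(u))

# I(u_k) decreases monotonically toward 0
assert all(b < a for a, b in zip(vals, vals[1:]))
```

With only five steps the decay is slow (the recursion I ≤ I − I²/8 gives roughly 1/k decay), which matches the lecture's point that the scheme converges but the limits are very rough functions.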
So the iteration always consists of taking your old object and adding to it a slow amplitude, determined in terms of your old object, times a fast oscillating wave. And two, you need a way of characterizing the fact that in the limit you have a solution. You really need these two things. Now, I had prepared on my computer: if you start with one half sine of 2 pi x, which is obviously an element of X0, and you choose the lambda_k's exponentially — I know that exponentials work — you can plot how u_8 looks, or u_10. And I drew some pictures, but I can't show them now, so let me try to sketch what they look like. They look like super violent variations, never exceeding one in absolute value, followed by little pieces where you're very close to one or minus one. And if you plot u_16 on your computer, because of the resolution you'll really see something very thick here and all sorts of lines in between. Why am I mentioning this? This method will always give you bounded solutions, but almost never continuous ones. And of course, we discussed that there's rigidity in the continuous setting — there are only two solutions. Okay, so this is why this method was really beautiful — it put Euler into this nice geometric context — but somehow really some new ideas were needed to go towards the Onsager conjecture, which is stated in Hölder spaces. And this, I thought and still think, is the fundamental contribution of Camillo and László — De Lellis and Székelyhidi again — and I think this is 2011, or 12. This was an Inventiones paper which really was the breakthrough towards the Onsager conjecture: given E from [0,1] into (0, infinity) smooth, there exist (maybe infinitely many) u in C0, weak solutions of 3D Euler, with total kinetic energy — let's call it capital E, because it's integrated in space — equal to E(t). And the emphasis here is on C0. It's not exactly C0, it's some Hölder exponent; let me write C0 plus.
They get some Hölder exponent, which in their first paper they didn't bother quantifying, but then they had a follow-up paper where they quantified that this exponent is at least one tenth. Okay, so there is a positive alpha. Now why did this theorem come about? What was the philosophy behind even hoping that this theorem exists? Well, if you remember, the story of convex integration started with Nash, who proved that you have isometric embeddings which are C1. So Nash proved that you can isometrically map S^{n−1} into the ball of radius epsilon around 0 in R^n, with a C1 map. Which is very different from saying that the map is Lipschitz. Because if it's merely Lipschitz, you can just fold your manifold, and if you fold it, okay, you have creases, but you preserve distances across the creases. Doing it in C1 is fundamentally different. And of course, this is the Nash–Kuiper theorem from the 50s. But even more: after the work of Nash and Kuiper, in the 60s — I think '63 — Borisov argued that you can improve this to C^{1,alpha} for some definite alpha. So if you can do C^{1,alpha} here, you should be able to do something for Euler. Why? Because in local charts, being an isometric embedding just means that the dot product of the partial derivatives, ∂_i u · ∂_j u, is your metric g_ij. So if you call A the differential of your map, this is really just saying A transpose A equals g. And the fact that A is a differential just means that its curl is zero. A linear PDE with a nonlinear constraint — it's like Euler. Curl is dual to div; Euler is a div, not a curl, but it's dual. And because this really looks like — if you follow their first paper — maybe Euler does fit in this framework, and in this framework Borisov could, in the 60s, do C^{1,alpha}, philosophically speaking you should be able to do the analogous Hölder gain there. What's the catch?
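The analogy on the board, spelled out in symbols (the local-chart formulation only, as in the lecture):

```latex
% Isometric embedding in local coordinates:
% u : \Omega \subset \mathbb{R}^n \to \mathbb{R}^m with
\partial_i u \cdot \partial_j u = g_{ij}.
%
% Setting A = Du (an m \times n matrix field), this splits into
A^{\mathsf T} A = g \quad \text{(pointwise nonlinear constraint)},
\qquad
\operatorname{curl} A = 0 \quad \text{(linear PDE: each row of } A \text{ is a gradient)}.
%
% Compare with the relaxed Euler system: a linear divergence-form PDE
% coupled with a pointwise nonlinear (in)equality.
```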
In order to go from L infinity — or, in the geometric case, Lipschitz — to C0 plus, respectively C1 plus, you need some good building blocks. Plane waves won't do it. In geometry you had the Nash twist and Kuiper corrugations; the geometers know what I mean. What would it be for Euler? What are the building blocks? And the main, somehow philosophical, idea was that instead of using plane waves, which is what this convex integration is really built for, you should use as building blocks not just sines, but steady Euler flows at high frequency. So the building blocks will be highly oscillatory steady solutions of Euler, and I will discuss exactly what I mean. So my goal for the rest of this lecture is to try to prove this theorem — but prescribing the energy is an additional technical thing that I don't want to get into; I really want to get into the construction. So instead I will prove the theorem which says that there exists a non-conservative solution. Instead of prescribing the profile, I will prove that there exists a solution whose energy at time 1 is much bigger than the energy at time 0: I will build the solution such that the integral of |u(x,1)|² is more than twice the integral of |u(x,0)|². So it's not conservative. And attaining an energy profile is not that much harder, but for the sake of time we won't do that. So this is the theorem I want to prove: there exists a C^beta solution — the exponent is actually going to be called beta, some tiny positive beta — whose energy doubles. That's the goal of today's lecture. And let me start with this before we get to the break. So now I need to erase some boards. The method of proof is really the same: we are going to construct an iteration. There is one major difference: instead of calling the iteration index k, we're going to call it q. So we will solve iteratively this equation. Let me explain.
For every q I will find a solution of this Euler–Reynolds equation, because we do have a Reynolds stress. And the little circle on top of my Reynolds stress just means it's traceless — so it is traceless symmetric. So at every step q, q a natural number, I will solve this equation. I want a way of quantifying the fact that, in the limit as q goes to infinity, I get a solution of Euler. So what I will want is that R_q in C0 goes to 0 as q goes to infinity, at some rate — I will tell you the rate later. So if you do this, then in the sense of weak solutions this term will go to 0. How about the nonlinear term? Well, you need to know what u_q does. And what we will want is that w_{q+1}, which is the increment u_{q+1} minus u_q, also goes to 0 very fast — in fact, like the square root of the stress, and I'll explain why. Throughout today's talk and tomorrow's talk I will use the squiggly less-than to mean "up to some constant", but the constant is independent of q; that's going to be important. And if this goes to 0 sufficiently fast, like geometrically, it means the sequence of increments is summable, and then you have strong convergence of u_q; then the nonlinear term will also converge strongly, and in the limit you will have an Euler solution. So it turns out that in order to make this work you have to be a bit more precise, so you have to introduce some things. We have to introduce, first of all, a sequence of frequencies lambda_q which will go to infinity, just like the lambda_{k+1} there. But instead of just saying that there exists such a sequence — we could do it like that — let's fix it from the beginning. It's convenient to take lambda_q equal to a to the power b to the q, some super-exponential sequence, where a is much larger than 1 — really, really large — and b is a bit bigger than 1. Let's say b is 2 for the purpose of brevity. So it's a to the 2 to the q, a super-geometric sequence.
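For the record, the Euler–Reynolds system solved at each step, in the notation above, can be written as follows (the sign in front of the stress is a convention, and conventions vary; this matches the way the divergence of the stress is used later in the lecture):

```latex
\partial_t u_q + \operatorname{div}(u_q \otimes u_q) + \nabla p_q
  = \operatorname{div} \mathring{R}_q,
\qquad
\operatorname{div} u_q = 0,
%
% with \mathring{R}_q symmetric and traceless, and the requirement
\|\mathring{R}_q\|_{C^0} \longrightarrow 0 \quad \text{as } q \to \infty .
```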
And of course, you know Nash–Moser — it's somehow related to the fact that in Nash–Moser you sometimes want to go super-exponential. So this will be my sequence. By the way, you can do it with a geometric sequence; it's just going to be a lot harder to write down. And this is going to be a frequency — by frequency I mean a Fourier frequency in space, not in time. Then I need to quantify things going to 0. So I need an amplitude parameter, an amplitude that goes to 0, and that's going to be called delta. And we're going to choose it to be an inverse power: delta_q equals lambda_q to the minus 2 beta, where beta is positive. This is an amplitude. So by this I mean that part of my induction will be that the norm of R_q in C0 is less than some tiny universal constant times delta_{q+1}. This is just a very precise way of quantifying the fact that it goes to 0. And I will also require some bounds on the sequence u_q, because otherwise it's going to be very hard to keep track of. I will want at least that my u_q's are uniformly bounded in C0 — at least that. Let's for convenience write 1; that way at each step I give myself a bit of room, and I will ensure that I stay below 1. And the other thing I want: remember, if you unfold this divergence you get u_q dot grad u_q — you have a transport equation, and transport equations behave beautifully as long as the drift is Lipschitz. So in particular it would be good to keep track of the Lipschitz norm. So we will require that u_q in C1, in space and time, is less than — what? Well, a derivative should cost lambda_q, and the amplitude should somehow be the square root: lambda_q times delta_q to the one half. Okay, so these are the inductive assumptions. So the lemma says: given u_q, R_q — remember, the pressure is always computed from the elliptic equation, so I won't track it —
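A quick numerical check of this bookkeeping: with lambda_q = a^(2^q) and delta_q = lambda_q^(−2β), the increment sizes delta_{q+1}^(1/2) = lambda_{q+1}^(−β) decay super-geometrically, and the initial stress (of size about 1/lambda_0, as computed after the break) fits under delta_1 because beta is below one quarter. The numbers a = 100, β = 0.1 are arbitrary illustrative choices.

```python
# Bookkeeping for the convex-integration parameters (illustrative values):
# lambda_q = a**(b**q), delta_q = lambda_q**(-2*beta).
a, b, beta = 100.0, 2, 0.1
assert beta < 0.25

lam = [a ** (b ** q) for q in range(6)]
delta = [l ** (-2 * beta) for l in lam]

# The C0 sizes of the increments, delta_{q+1}**0.5 = lambda_{q+1}**(-beta),
# decay (super-)geometrically, hence are summable:
incr = [d ** 0.5 for d in delta[1:]]
assert all(later < earlier / 2 for earlier, later in zip(incr, incr[1:]))

# The initial stress, of size ~ 1/lambda_0, sits below delta_1 =
# lambda_0**(-4*beta); this needs lambda_0**(1 - 4*beta) large, i.e. beta < 1/4:
assert 1.0 / lam[0] < delta[1]
```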
Given u_q, R_q with these three bounds, there exists u_{q+1}, R_{q+1} which obeys the equation with q replaced by q+1, and the bounds with q replaced by q+1. And there's one more caveat: the inductive step additionally requires that when I go from u_q to u_{q+1} — which is exactly this increment — I respect this inequality: u_{q+1} minus u_q in C0 should be less than delta_{q+1} to the one half, or a constant times it. Okay, so our goal for the next hour will be to prove this inductive step. Given u_q we'll find u_{q+1}, and our constraint, in some sense, is that we don't do this with too much effort: if you measure effort in C0, it shouldn't take more than that much. Okay, so I think now it's time for a quick break. Okay, so let's start again. First I should correct something that Anlor and Patrick told me at the break. There was a blunder. I said that the set where I(u) = 0 is dense in X0 with respect to the L2 topology, and that's of course nonsense. The density is with respect to a weak topology, such as H^{−s} for s positive. And let's just quickly see this, because it's not so hard. If you think about u_1 minus u_0, this is just one half (1 − u_0²) sin(lambda_1 x). So in what sense is this small, even for the first iterate? If u_0 is 0, which is allowed, then this is one half of a sine — and in what sense is half of a sine small? It's small in a negative-order space: it's small in H^{−s} if lambda_1 is large. And this is exactly the type of density that you can prove. So I will blame this on the jet lag. And I will also erase what I just wrote, which one should never do. Okay, let's return to what we wrote on the board there. Our goal is to prove this inductive proposition. So let's start by showing: once you have the inductive proposition, why does the theorem follow?
So the inductive proposition says that if you give me a u_q, I'll give you u_{q+1} — so you need to start somewhere, and let's start by defining some u_0. Okay, so keep in mind that lambda_0, which is a to the 2 to the 0, that is a itself, is already very large. So if I choose my u_0 to be something oscillating at frequency lambda_0, then maybe I should choose a shear flow; for convenience, let's put a one half in front. This shear flow is a stationary solution of Euler. But if I choose this as my starting point, it's not so obvious that I would be able to double my energy. So instead, let's multiply it by t, with t between 0 and 1. Then at time 0 this object has zero energy, because it's the zero function, and at time 1 its L2 norm is of order 1 — it's actually comparable to a half. So this is the kinetic energy of u_0: it grows like t squared. "Sorry — what is the square?" The t squared; yes, of course, you mean I shouldn't draw straight lines. Of course, because I have put this t there, this is not a stationary solution of Euler anymore. So what equation does u_0 obey? The transport term is 0, because it's a shear flow — in fact, shear flows don't even have pressure, so I can take zero pressure. The only error I made by choosing this function is that the time derivative lands there. So how do I write this vector, dt u_0, as the divergence of a traceless matrix? That's explicit, in fact. So dt u_0 equals the divergence of R_0, where — let me not write "up to constants", let me write exactly what it is — R_0 is traceless symmetric, and you can just check that if you take the divergence of it you get exactly dt of u_0. So there's a balance between these two. Now, in order to satisfy my induction scheme: my u_0 in C0, this is less than 1. Okay — I subtracted something there, so if I want to be a bit more careful, maybe I should divide this by 2 or something, and then you should put a 2 there. Now it's going to be okay.
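One concrete choice consistent with what's on the board — the specific shear direction and coordinates here are my reconstruction, not necessarily the lecturer's, on the periodic box:

```latex
u_0(x,t) = \frac{t}{2}\,\big(\sin(\lambda_0 x_2),\, 0,\, 0\big),
\qquad p_0 = 0,
%
\mathring{R}_0(x) = -\frac{\cos(\lambda_0 x_2)}{2\lambda_0}
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} .
%
% Check: (\operatorname{div}\mathring{R}_0)_1 = \partial_2 (\mathring{R}_0)_{12}
%      = \tfrac12 \sin(\lambda_0 x_2), the other components vanish,
% so \operatorname{div}\mathring{R}_0 = \partial_t u_0, while
% \operatorname{div}(u_0 \otimes u_0) = 0 for a shear flow.
% Note \|\mathring{R}_0\|_{C^0} = 1/(2\lambda_0): the stress is small
% precisely because the shear oscillates at the large frequency \lambda_0.
```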
Is my R_0 in C0 small? Well, R_0 in C0 is of size 1 over lambda_0, and the question is whether that is less than some universal constant times delta_1. Who is delta_1? Delta_1 is lambda_1 to the minus 2 beta. So is this true? It's true if, basically, lambda_0 times lambda_1 to the minus 2 beta can be made large. What is this actually equal to? This is lambda_0 to the 1 minus 4 beta, because remember, lambda_{q+1} is nothing but lambda_q to the power b, and b is 2. So if beta is tiny — in particular, less than a quarter — then I can of course make this large. And then you can check the C1 norm; it also comes out correct, and it's correct because I've put that one half there. "I have a dummy question about the notation. Sometimes you're putting a dot above the R and sometimes not." There's always a dot. The dot only means traceless — nothing else; it doesn't do anything else. Okay, so with this starting point I can now apply my iterative lemma, assuming I can prove it, and if I can prove it, I have a sequence of u_q's. And the point is that u minus u_0 is the sum of all the increments w_{q+1}. So I can test how small this is in C0: my inductive lemma tells me that w_{q+1} in C0 is less than delta_{q+1} to the one half, which, using my notation, is nothing but lambda_{q+1} to the minus beta, and no matter how tiny beta is, this sum converges geometrically. And it is of size lambda_0 to the minus beta, roughly speaking. Lambda_0, remember, is a huge number — it's a, and a we've chosen large. So whatever the limiting profile is, in the C0 topology it's going to be very, very close to u_0. So if u_0 has this kinetic energy profile, then my limiting solution, whatever it will be, will have kinetic energy in a tiny tube around this kinetic energy — a tube of size roughly lambda_0 to the minus beta.
So there's a tube here, and the kinetic energy I get at time 1 will be somewhere inside it, while at time 0 it's at most that much. The tube has width λ_0^{-β} and the endpoint is roughly 1 plus or minus that, and this is why the energy at least doubles: it starts tiny and ends in a neighborhood of 1. So the only thing that's left is to convince you that you get some regularity. To check this, let's estimate the C^α norm of the increments. Every derivative will cost me a frequency. What should I write? The w's. By interpolation, the C^α norm of w_{q+1} is at most its C^0 norm to the power 1−α times its C^1 norm to the power α. The C^1 norm is at most the gradient of u_{q+1} plus the gradient of u_q, and the gradient of u_{q+1} costs more, so this is at most λ_{q+1} δ_{q+1}^{1/2}; I will ensure this as part of my construction, it's not obvious from what is written, or you can do interpolation if you want. So in total the sum over q is bounded as follows: the C^0 piece is tiny, it's δ_{q+1}^{1/2} according to my inductive scheme, raised to the power 1−α, whereas the C^1 piece costs (λ_{q+1} δ_{q+1}^{1/2})^α. And now remember what these things mean: the δ's are nothing but λ's to a power, so this is λ_{q+1} to the power −β(1−α) + α(1−β). It's just algebra, which is probably done wrong. So the question is: when is this exponent negative? And you will discover that what you need is α < β. So the solutions converge strongly in C^α for any α < β, and if I can prove my inductive proposition with some positive β, I will get some kind of Hölder continuous solution. So what's left now is to prove the inductive proposition, which is the hard part, of course.
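The exponent algebra in the interpolation step can be checked symbolically; this just restates the computation above:

```python
# Interpolation: ||w_{q+1}||_{C^alpha} <= ||w||_{C^0}^{1-alpha} ||w||_{C^1}^{alpha}
#             <= (delta_{q+1}^{1/2})^{1-alpha} (lambda_{q+1} delta_{q+1}^{1/2})^{alpha},
# and with delta_{q+1} = lambda_{q+1}^{-2*beta} the exponent of lambda_{q+1} is:
from sympy import symbols, simplify

alpha, beta = symbols('alpha beta')
exponent = -beta * (1 - alpha) + alpha * (1 - beta)

# the exponent collapses to alpha - beta, so the sum converges exactly when alpha < beta
assert simplify(exponent - (alpha - beta)) == 0
```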
So let's try to see what happens. I start from (u_q, R_q) and I want to go to (u_{q+1}, R_{q+1}). They both obey an equation, the Euler–Reynolds equation. So what you can do is subtract the Euler–Reynolds equation at level q from the one at level q+1, and what you will discover is of course an equation for the increment, plus some pressure. I don't want to bother keeping track of the pressure because it's always determined from the elliptic equation. I think I've done a sign wrong because you subtract, so this becomes a plus. The first line is nothing but the linearization of Euler around the old solution u_q. The second line is the new nonlinear term. Then you have the old stress. So you see that the new stress, which is your goal, should be much smaller than the old one, but you are given a budget for this w. So in some sense it's a minimization problem: minimize the C^0 norm of R_{q+1} subject to this PDE and an amount of effort. Of course this minimization problem is not the kind you can actually solve using variational principles, so you have to design the solution ad hoc. Now let's see what happens with this minimization problem if we ignore the transport term. So: this is going to be called the transport term. This is going to be called the Nash term, because in the isometric embedding problem there is exactly such a term, at least in spirit. And this term contains nonlinear functions of highly oscillatory things, so we're going to call it the oscillation term. Let's start by discussing the apparently most important term, the oscillation term, because that's what contains R_q. You want to cancel something low frequency with some product of high-frequency functions. And this is exactly doable when the high frequencies cancel each other: you have a resonant interaction and they spit out a low frequency. So if there's a high-high-to-low cascade, you can achieve this.
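Schematically, the subtracted equation has the following shape. This is my reconstruction of the board, with the pressure bookkeeping suppressed, so signs on the pressure terms should not be taken literally:

```latex
\operatorname{div} R_{q+1} - \nabla p_{q+1}
\;=\;
\underbrace{\partial_t w_{q+1} + u_q\cdot\nabla w_{q+1}}_{\text{transport}}
\;+\;
\underbrace{w_{q+1}\cdot\nabla u_q}_{\text{Nash}}
\;+\;
\underbrace{\operatorname{div}\big(w_{q+1}\otimes w_{q+1} + R_q\big)}_{\text{oscillation}}
\;-\;\nabla p_q .
```

The first two transport/Nash terms together are the linearization of Euler around u_q; the oscillation term carries both the new quadratic interaction and the old stress to be cancelled.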
And normally you do this with plane waves, right. So let me try to make an ansatz for how w_{q+1} looks; and because this is not exactly going to be w_{q+1} but somehow more the principal part of it, let's put a superscript p there to emphasize that this is the principal part. Okay, so before I can actually write it down, I started writing it but let me erase it, you discover one interesting thing. In order to compute R_{q+1} you need to know gradients of this and gradients of that. But then, going back through the iteration, you need second gradients, and going back further, third gradients. It looks like there's a loss-of-derivatives problem. The cheapest way to handle this issue is to mollify the equation. So it turns out that what we should have done, and this you discover a posteriori, which is why I couldn't just do it from the beginning, is to mollify that equation. I never said explicitly that u_q is frequency-localized at λ_q; I could have said it, making it an additional inductive constraint, but I never did, so instead let me just mollify. And we've already discussed with the Reynolds stress exactly what happens when you mollify. So let ℓ be a small length scale; I will tell you what it is. What's really important is that ℓ is a bit smaller than λ_q inverse and much larger than λ_{q+1} inverse; these are Fourier frequencies, so inverse powers are lengths. And in terms of this parameter, it turns out you can just choose the geometric mean, straight in the middle, and then ℓ is nothing but λ_q^{-3/2}, because remember λ_{q+1} is exactly the square of λ_q, so the geometric mean is exactly that. It turns out that if you mollify now, in space and time, you call u_ℓ the mollified thing.
And you call R_ℓ the mollified stress. Then the Euler–Reynolds equation simply becomes the same equation, but, as we have discussed, with an additional term. This additional term comes from the fact that mollification does not commute with products. So let's write R_commutator. This commutator error comes from mollifying the product and subtracting the product of the mollifications, and of course I may have the sign off. How big is this commutator error? Let's estimate the C^0 norm. You can do fancy bounds or silly bounds; the silly bound is just: one factor is bounded by 1, and the other is bounded by ℓ times the gradient. I will use throughout my talk that with this choice of ℓ this number is tiny, much, much less than one. And in particular what we would really like is for it to be less than δ_{q+2}, because then this piece of the new stress is already consistent with the estimate we want at level q+2. So let's check. This is exactly λ_q^{-3/2+1-β}, that is λ_q^{-1/2-β}, and the question is whether this is much less than δ_{q+2} = λ_{q+2}^{-2β}. But here is q and there is q+2; since I've chosen b = 2, λ_{q+2} = λ_q^4, so δ_{q+2} is λ_q^{-8β}. And this means that you need some small multiple of β, seven or nine depending on the bookkeeping, to be less than a half. So okay, make β less than, whatever, one over 20; then this is fine. So this stress that I got here I don't even have to touch; I can just leave it there. Okay. Now what I'm going to do is subtract the equation obeyed by u_{q+1} not from the equation obeyed by u_q but from this mollified one, and I will call w_{q+1} = u_{q+1} − u_ℓ, not u_{q+1} − u_q. Then everything changes a little bit, but not much: this becomes u_ℓ, this becomes u_ℓ, this becomes R_ℓ, and there's this additional term.
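The exponent bookkeeping here can be sanity-checked numerically. I take β = 1/20 as in the lecture; the exact multiple of β in the final condition depends on constants I'm glossing over:

```python
# With lambda_{q+1} = lambda_q^2 (so b = 2) and ell the geometric mean of the
# two length scales, ell = (lambda_q * lambda_{q+1})^{-1/2} = lambda_q^{-3/2}.
beta = 1 / 20

exp_ell = -(1 + 2) / 2            # exponent of lambda_q in ell when b = 2
assert exp_ell == -1.5

# commutator bound: ell * ||grad u_q|| ~ lambda_q^{-3/2} * lambda_q^{1 - beta}
commutator_exp = -0.5 - beta       # exponent of lambda_q in the commutator error

# target: delta_{q+2} = lambda_{q+2}^{-2*beta} with lambda_{q+2} = lambda_q^4
target_exp = -8 * beta

# the commutator error is already below delta_{q+2}, so it never needs correcting
assert commutator_exp < target_exp
```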
Plus or minus the commutator stress, I don't remember the sign. That's all that changes. And this commutator stress we've already bounded; it's much less than δ_{q+2}, so we don't even have to do anything about it. It's done. So our game now is not to cancel R_q but to cancel R_ℓ. What's the advantage of doing this? Now you know exactly how much every derivative costs: a derivative hitting u_ℓ costs at most ℓ inverse, a derivative hitting R_ℓ costs at most ℓ inverse, for all of them. And ℓ inverse is very small compared to λ_{q+1}; remember this was the geometric mean of λ_q and λ_{q+1}. So you've moved everything to an intermediate frequency which is still much less than the next frequency, so we can still talk of high and low. [Question.] Sorry, ℓ is tiny; ℓ is a length scale, ℓ inverse is like a frequency, and it is very small compared to the next guy. Okay. So let's discuss the backwards cascade, and to discuss the backwards cascade we need to discuss linear algebra, which is a little unpleasant. You want to cancel a matrix which is traceless and symmetric with the high-high-to-low cascade coming from some kind of quadratic form. So it turns out that a couple of things are true. Number one, a little linear algebra lemma. There exist finitely many sets of rational points on the sphere; for what I'm writing here I need two of them, Λ^0 and Λ^1. Each set has cardinality six and is closed under taking opposites, so twelve points in all, with the property that k in Λ^α implies −k in Λ^α: every set contains the negative of each of its points. And there exist smooth functions defined on a ball around the identity in the space of symmetric matrices.
So you take an ε-ball around the identity in the space of symmetric matrices, and I'm claiming that there exist C^∞ functions γ_k from this ball to R, such that for any matrix in there, let me not call it R, let me call it M for matrix, we have M = ½ Σ_k γ_k(M)² (Id − k⊗k). What does this lemma say? I have a family of symmetric matrices given by Id − k⊗k, where k is some vector on the unit sphere. What this says is that this family forms a basis for the space of symmetric matrices around the identity, and moreover you can do it with very smooth coefficients, written as squares, as long as you remain close to the identity. And you can easily believe this because it's just the implicit function theorem. Or even better, you can actually write things down explicitly: I can write Pythagorean points on the sphere and just span matrices with them; then I can rotate these tuples by any rational rotation and get another six-tuple, and so on. So that's the little linear algebra. It gives us a hint that if the low-frequency part of this vector tensor vector looks like that, then maybe we're going to be able to span any matrix in a neighborhood of the identity. Okay, the next lemma, well, let me not write "lemma" because I don't want to put quantifiers, is about stationary solutions of Euler, and in particular about Beltrami fields. Keep in mind that an eigenfunction of curl, which is what a Beltrami field is, is a stationary solution of Euler, with pressure given by exactly minus the absolute value of the Beltrami field squared over two. Okay, so let me construct some complex Beltrami fields. Well, actually I'll make them real in the end, because we do work with real numbers.
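Here is a small numerical illustration of the spanning part of the lemma. The six rational directions below are my own illustrative choice (axes plus Pythagorean points), not the actual sets Λ^α, which are also engineered so that the coefficients stay positive near the identity, a property this sketch does not address:

```python
import numpy as np

# Six rational unit vectors: the coordinate axes plus three Pythagorean directions.
ks = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),
    (3/5, 4/5, 0), (4/5, 0, 3/5), (0, 3/5, 4/5),
]

def sym_vec(M):
    # coordinates of a symmetric 3x3 matrix in the 6-dim space of symmetric matrices
    return np.array([M[0, 0], M[1, 1], M[2, 2], M[0, 1], M[0, 2], M[1, 2]])

basis = np.column_stack(
    [sym_vec(np.eye(3) - np.outer(k, k)) for k in map(np.array, ks)]
)
# the six matrices Id - k (x) k are linearly independent, hence a basis
assert np.linalg.matrix_rank(basis) == 6

# so any symmetric M near the identity is a combination of them:
M = np.eye(3) + 0.05 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0.4]])
coeffs = np.linalg.solve(basis, sym_vec(M))
recon = sum(c * (np.eye(3) - np.outer(np.array(k), np.array(k)))
            for c, k in zip(coeffs, ks))
assert np.allclose(recon, M)
```

The real lemma upgrades this linear-algebra fact: near the identity the coefficients can be taken to be squares of smooth functions γ_k², which is what lets them serve as amplitudes later.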
So given k in one of these sets, a point on the unit sphere with rational entries, I'm going to choose a_k, also rational, orthogonal to k, and I'm going to set a_{−k} = a_k. So a_k is just a vector in the plane orthogonal to k; then k × a_k gives you another vector in this plane, and if you're careful {k, a_k, k × a_k} forms an orthonormal basis. So then you can define a complex vector b_k = (a_k + i k × a_k)/√2. What are the properties of b_k? I've put the √2 there so that b_k has length one. Second, b_k dotted with k is zero, because a_k is orthogonal to k and of course the cross product is orthogonal to k. But more importantly, what happens when we take i k × b_k? That's why we've made these complex beauties: k crossed with a_k stays, while k crossed with k × a_k gives you back minus a_k, and when you multiply by i everything reassembles, so it turns out that i k × b_k is exactly b_k. And lastly, when you go from k to minus k: a_{−k} is the same as a_k, but the cross product picks up a minus, so b_{−k} is b_k conjugate. So far I've just defined some crazy complex vectors. And the reason you define them is that you can now define an eigenfunction of the curl operator which is divergence free: that vector, multiplied by e^{i λ_{q+1} k·x}. So far a definition; does everybody believe me that this is an incompressible vector field? Of course, because k · b_k is zero. Does everybody believe me that curl picks up exactly λ_{q+1}? The curl brings down i λ_{q+1} k ×, and i k × b_k = b_k. So this is an incompressible Beltrami field. And no matter what k is, because we've normalized everything to unit length, this gives the same eigenvalue λ_{q+1}. So if I take linear combinations of these I will still get an eigenfunction with the same eigenvalue. So this is the lemma, somehow: linear combinations of these W_k's still give you a steady solution of Euler.
So every single one is a steady solution of Euler because it's an eigenfunction of curl. But so are linear combinations, and this is somehow really the punchline. The punchline is: if you give me complex numbers a_k, and I define the corresponding linear combination, so these are just numbers... [Question: the linear combination works because they have the same eigenvalue?] Yes, and this is what I'm writing now. You need to arrange this so that the combination is a real vector field. And it's an eigenfunction of curl, and it's divergence free; in particular, let's call this thing bold W, because it's an eigenfunction of curl the divergence of W ⊗ W is a pressure gradient. That's why it's a stationary solution of Euler. Now, I've written a lemma about spanning matrices and another lemma about eigenfunctions of curl. How are they related? They are related through the fact that we can compute the low-frequency part of this object W ⊗ W. This object is a sum of oscillatory waves, and the low-frequency part appears exactly when two opposite waves interact. So let's write this: the low-frequency part, the mean in x, is exactly the sum over k of |a_k|² times b_k ⊗ b_{−k}, because that's the only way you're going to get zero modes; by the way, I have not emphasized it, but when these two talk to each other the phases just cancel. So let's symmetrize. Since b_{−k} is the conjugate, this is twice the real part of b_k ⊗ b_k conjugate, and because we were careful, and this is why we've gone through all this trouble, you can actually compute what this real part is, and guess what: it's exactly Id − k ⊗ k. That's how they were designed. Should I write this? Yes, I guess I should. So now you see what's happening.
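The algebraic identities behind these Beltrami waves are easy to verify numerically. The concrete choice k = e_3, a_k = e_1 below is illustrative:

```python
import numpy as np

k = np.array([0.0, 0.0, 1.0])                 # unit vector on the sphere
a = np.array([1.0, 0.0, 0.0])                 # a_k, orthogonal to k
b = (a + 1j * np.cross(k, a)) / np.sqrt(2)    # b_k = (a_k + i k x a_k) / sqrt(2)

assert np.isclose(np.vdot(b, b).real, 1.0)    # |b_k| = 1
assert np.isclose(np.dot(k, b), 0.0)          # k . b_k = 0, so the wave is divergence free
assert np.allclose(1j * np.cross(k, b), b)    # i k x b_k = b_k, the curl-eigenfunction identity

# resonant (low-frequency) interaction of the wave with its opposite, b_{-k} = conj(b_k):
# 2 Re( b_k (x) conj(b_k) ) = Id - k (x) k
low_freq = 2 * np.real(np.outer(b, np.conj(b)))
assert np.allclose(low_freq, np.eye(3) - np.outer(k, k))
```

The last identity is the bridge between the two lemmas: the zero modes of W ⊗ W produce exactly the matrices Id − k ⊗ k that the linear algebra lemma knows how to combine.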
The way you create the high-high-to-low cascade is by putting in these Beltrami flows: to leading order they solve Euler, so when all the derivatives land on the highly oscillatory factor you get Euler, but somehow their mean is useful for cancelling stresses. So now we can actually try to cook. Let's make a first attempt at designing the perturbation. The principal part of the perturbation, and I'm warning you this is a first attempt, will be the sum over k in one of these sets of some smooth functions times the fast waves. These smooth functions will depend on the old stress; I will explain a little what they are, but if you remember our old convex integration constructions, these are the slow amplitudes, and now we're putting in the fast waves. And the only thing I'm going to require from these functions is that a_{−k} is the same as a_k, so that I get a real-valued thing. Okay, so let's compute the low-frequency part of the oscillation error, and it's R_ℓ now, sorry, we've mollified. So let's compute the divergence of w^p_{q+1} ⊗ w^p_{q+1}. This has two pieces in it; let me write the sum over k and k′. These gradients can land either on the amplitudes or on the fast factors. So let me first write down the piece where the divergence lands on the amplitudes: this is a vector contracted with that, so I should have written this-dot-grad-that. What's next? The divergence can land on the fast factors, but let's separate k′ not equal to −k from k′ equal to −k, because these are very different for us, and the reason is that if k′ is not equal to −k, this term is for sure high frequency: it does not have a low-frequency component. I said that we're going to write the oscillation error but I forgot to write plus R_ℓ, sorry. And then there's the piece where k′ equals −k. So how should we handle k′ = −k?
It really has two pieces: one was already included here, when the divergence lands on the amplitudes. So what's left is the divergence hitting the high-frequency factors, but maybe when k′ is −k there is no high frequency at all, right? So this piece we somehow have to treat separately. So let me write plus, and let's focus first on the low-frequency part of this contribution. The low-frequency part is exactly what we discussed, but without applying the divergence, I emphasize. Now k′ is −k, so let me just write |a_k|², and again, because I want to be pedantic, let's symmetrize. Okay, am I missing anything from the low-frequency part? Yes: plus R_ℓ. What should I design the a_k's to make this equal to? What is R_ℓ? It's a small matrix, right? It's not in the neighborhood of the identity. Okay, so what we're going to arrange is not "equals zero" but "equals a constant multiple of the identity". This is actually what we're going to do. So I will tell you which a_k to choose in order to ensure this. Look at this matrix M = Id − R_ℓ/δ_{q+1}: what I have done is put R_ℓ on the other side and divided by δ_{q+1}. Now, R_ℓ, look at the inductive assumption: it's mollified R_q, so it has the same norm, and there was a tiny parameter c_R in my inductive assumption, which I've now dropped; its role was to say that this is not just an order-one matrix but a small one. So this M lies in a small C^0 neighborhood of the identity, and I can apply the linear algebra lemma to it, because, as we have discussed, the resonant term gives exactly Id − k ⊗ k. So what should you choose for a_k? You choose exactly δ_{q+1}^{1/2} γ_k(M). You see, the γ squared will give you back your matrix: this times this gives me the combination, and the square of that restores the scaling. This is now beautiful.
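Putting the two lemmas together, the cancellation reads as follows. This is schematic, with constants as in the lecture's conventions (the one half from the linear algebra lemma absorbed into the resonant term):

```latex
M \;=\; \mathrm{Id} \;-\; \frac{R_\ell}{\delta_{q+1}},
\qquad
a_k \;=\; \delta_{q+1}^{1/2}\,\gamma_k(M),
\\[6pt]
\sum_k a_k^2\,\tfrac12\big(\mathrm{Id} - k\otimes k\big)
\;=\; \delta_{q+1}\cdot \tfrac12\sum_k \gamma_k(M)^2\big(\mathrm{Id} - k\otimes k\big)
\;=\; \delta_{q+1} M
\;=\; \delta_{q+1}\,\mathrm{Id} \;-\; R_\ell .
```

So the low-frequency part of $w^p_{q+1}\otimes w^p_{q+1}$ plus $R_\ell$ equals $\delta_{q+1}\,\mathrm{Id}$, a multiple of the identity, whose divergence is a pure pressure gradient.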
We have an explicit definition of the amplitudes, and you see, it's explicit. You get for free that the amplitudes are of size δ_{q+1}^{1/2}, because the functions γ_k can be normalized to be of size one. So now you see that automatically the size of the perturbation in C^0 is determined: the exponentials have size 1, the Beltrami vectors are normalized to be of order 1, so the size of this guy in C^0 is automatically δ_{q+1}^{1/2} times a constant; I could put tiny constants all over the place, which I'm not going to do. So this already says that the perturbation has the correct budget. What's left? When we took the product, we treated separately the divergence hitting the amplitudes; we've treated all those cases. Then we had the sum over k and k′, which we split in two: for k′ not equal to −k we let the divergence hit the fast factor, and what's left, the sum where k′ equals −k, has been dealt with already. But I couldn't have said this from the beginning, because it's really not an obvious computation. Okay, how small is this remaining term? [Question: I thought the a_k's were scalar. Because on the right-hand side you have the identity minus R.] They are scalar functions; they are the scalar coefficients of these matrices. [Question: where does it come in that you have chosen Beltrami fields?] Let's do the same as before: symmetrize. And I want to emphasize: you cannot just let the divergence hit this factor for free, because k and k′ don't sum to zero; they sum to something bounded from below, by a half let's say. So after multiplying by λ_{q+1}, this object is genuinely high frequency; it has frequency λ_{q+1}, and when the divergence hits it you will pay a λ_{q+1}. I keep saying one thing and writing the other. Okay, you could say: well, Vlad, anyway you don't need to compute the divergence of this; you need that guy, the stress itself.
So what you will really need to do at the end is apply an inverse divergence. Then you can say: well, you don't really lose this frequency, because you're going to gain it back when you apply the inverse divergence. But okay, you gain it back; are you smaller? These amplitudes have size δ_{q+1}^{1/2}, so the product is of size δ_{q+1}. You have not become smaller; you haven't gone down to δ_{q+2}. So this is essentially not acceptable, and this is where the precise structure of steady Euler flows comes in. This term happens to equal a gradient, and this, I would say, is the key identity for Beltrami fields. Well, Beltrami fields are designed such that the divergence of their tensor product is a pressure gradient, so you should not be surprised when I write that this is equal to a gradient. It's a little computation, but it's explicit; you just do it. So this is actually a pressure term. What we will do with it is rewrite it as one half the gradient of the sum over k′ = −k, thank you, that's when you take the gradient outside; that's a pressure gradient, you hide it in the pressure, and then minus the corresponding gradient term. So, modulo pressure gradient terms, the entire contribution of the oscillation error is of the type: product of highly oscillatory factors with the gradients landing on the amplitudes. This has an issue; does anybody want to comment? You see, when we've done this computation, the entire contribution from k′ = −k is here; that's very important, because the rest is high frequency. So what is the frequency of the amplitude factors? What does that mean, how much does a gradient cost? The gradient will hit R_ℓ, so it's going to cost an ℓ inverse; the amplitude has frequency ℓ inverse, and I mean this loosely speaking. Same thing here: frequency ℓ inverse.
So consider applying the inverse divergence to such a term, slow times fast; let's write it. What do I mean by inverse divergence? Divergence has a huge kernel; any curl is in the kernel. But it turns out that on the torus any function of zero mean is the divergence of somebody: it is the divergence of the gradient of the inverse Laplacian. Of course, what I have written there doesn't spit out a traceless symmetric matrix. But you can define this type of elliptic operator to spit out traceless symmetric matrices: you symmetrize, adding the transposed gradient of the inverse Laplacian, which dies under the divergence because f will be incompressible, and then you subtract the trace. So there exists an elliptic div-inverse operator, which is an operator of order minus one times a matrix of Calderón–Zygmund operators, just Riesz transforms. So when you apply this inverse divergence to the product, what is it bounded by in C^0? Operators of order minus one should gain you a frequency. This factor is slow, this one is fast; by the triangle inequality in frequency their product is fast, so the frequency it gains you is λ_{q+1}. What will you pay? The amplitudes: this one has amplitude one, this one has amplitude ℓ inverse times δ_{q+1}. Okay, this is a bit idealized; Calderón–Zygmund operators are not bounded on C^0, but I'll write the correction so small that you can't even see it, because it's irrelevant; it really does not matter. So you can prove this estimate; it's really easy, you can just do it. The question is: is this good enough? Is this less than δ_{q+2}? So let's check. And the answer is yes, it is, because ℓ inverse is λ_{q+1}^{3/4}.
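The operator described above can be written out; this is my reconstruction, with the trace correction left schematic since the exact normalization varies between references:

```latex
\mathcal R f
\;=\; \nabla\big(\Delta^{-1} f\big) + \big(\nabla \Delta^{-1} f\big)^{T}
\;-\;\text{(trace correction)},
\qquad
\operatorname{div}\,\mathcal R f = f
```

valid for zero-mean $f$ on the torus (the transposed term drops under the divergence when $f$ is incompressible, as in the lecture). The key gain on oscillatory inputs is

```latex
\big\|\mathcal R\big(a(x)\,e^{i\lambda_{q+1} k\cdot x}\big)\big\|_{C^0}
\;\lesssim\; \lambda_{q+1}^{-1}\,\|a\|_{C^0} \;+\; \text{lower-order terms},
```

which is the "order minus one gains a frequency" heuristic used in the estimate that follows.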
Times λ_{q+1}^{-1} that makes minus a quarter, and then minus two β. And is this much less than δ_{q+2}, which is λ_{q+1}^{-4β}? I keep using b = 2. And the answer is: well, of course, yes, if β is small. It turns out that this computation is exactly the same computation for the Nash error; there we don't need linear algebra anymore. You see, the Nash error is again something fast times the gradient of something slow, so you get exactly the same thing when you apply the inverse divergence. And if you are very careful, and you don't choose these parameters the way I've chosen them, you can very quickly see that the Hölder exponent one third comes out of the Nash error. So I just want to do this computation very quickly. So let λ_q = a^{b^q}, and let's say I was a bit more careful and didn't choose b = 2, but instead let b tend to 1. It doesn't work with what was done there, but let's say I could have done it, and let's say I only had the Nash error. Then, if we go back here, a gradient of the slow factor should really cost me only λ_q. Why? Because although I've mollified, I was propagating a gradient bound; remember, it was part of the inductive assumptions. I don't have to pay ℓ inverse for the first gradient; I can pay what I've propagated, which is λ_q δ_q^{1/2}, right? Then the inverse divergence gains me one over the frequency, and I still have the amplitude of w_{q+1}. So what am I writing here? I'm writing the C^0 norm of the inverse divergence of the Nash term, and the question is whether this is much less than δ_{q+2}. So let's write this down, just because I think it's nice to see. I'll write everything in terms of base λ_q; let me just write the exponent. So I have the exponent 1 − β from the gradient, minus βb from the amplitude, remember to go from q to q+1 you multiply the exponent by b, minus b from the inverse divergence, and then, putting δ_{q+2} on the other side, plus 2βb².
And I would like this to be negative; this is the exponent of λ_q in the expression, and if it is negative I'm done. Well, let's do the algebra, which I will regret doing in public. So first of all, is this exponent correct? The 1, the minus β, the minus βb, the minus b: correct, plus the 2βb². Is b = 1 a root of this? At b = 1 you get 1 − β − β − 1 + 2β = 0, so yes, it's a root. That means we can factor out b − 1, and now we want the remaining factor to be negative. So why don't you write b = 1 + ε and work it out. [Suggestion from the audience: why don't you take the derivative at b = 1?] Yes, if you take the derivative at b = 1 you get exactly the right number: the derivative is −β − 1 + 4βb, which at b = 1 is 4β − β − 1, that is, 3β − 1. So as b goes to 1, the exponent behaves like (b − 1)(3β − 1). So if this number 3β − 1 is negative, we are done, and you see that its negativity exactly means β strictly less than one third. If there were only the Nash term, and if you could send b to 1, then you could actually prove the Onsager conjecture pretty easily. But it turns out that this is kind of hard to do. Now, in the proof I have given you, what is the only thing I haven't talked about? And I'm getting close to being done: I've completely ignored the transport term, which is the main term in some sense. I've pretended that the transport term wasn't there.
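The punchline factorization can be checked symbolically; this is exactly the algebra done live above:

```python
# Nash-error exponent of lambda_q: e(b, beta) = 1 - beta - beta*b - b + 2*beta*b^2.
# Claim: b = 1 is a root, and near b = 1 the sign is governed by 3*beta - 1,
# so the exponent is negative (for b slightly above 1) exactly when beta < 1/3.
from sympy import symbols, diff, simplify

b, beta = symbols('b beta')
e = 1 - beta - beta * b - b + 2 * beta * b**2

assert e.subs(b, 1) == 0                                   # b = 1 is a root
assert simplify(diff(e, b).subs(b, 1) - (3 * beta - 1)) == 0  # slope at b = 1
assert simplify(e - (b - 1) * (2 * beta * b + beta - 1)) == 0  # explicit factorization
```

The explicit factorization e = (b − 1)(2βb + β − 1) makes the limit transparent: the second factor tends to 3β − 1 as b → 1.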
And actually, as it turns out, it is the main term. So I will just, in five minutes, tell you what to do with it, and then I'm going to stop; and of course during Tristan's lectures he will talk much more about the importance of the transport term and how it interacts with the oscillation term. But how could you possibly deal with this term? Well, if w_{q+1} is a sum over k of something slow times something fast, then to leading order this transport derivative hits the fast factor. So the thing we need to do, and it seems natural, is to transport these Beltrami fields by the flow induced by u_ℓ. So you solve ∂_t Φ_i + u_ℓ · ∇Φ_i = 0 with Φ_i equal to the identity at some time t_i; so there are times t_i from which you solve this transport equation. Of course, when you solve the transport equation and compose with these flows, and by the way this is the principal part, does this still look like a Beltrami field? Is it still an eigenfunction of curl? And the answer is: well, no, because you've twisted the Beltrami field. But to leading order it is, as long as this flow map doesn't deform too much. So if the gradient of Φ_i minus the identity is tiny, this will still look like a Beltrami field; but you really need this. And for how long is this going to be true? It's going to be true on a time interval whose length is one over the Lipschitz norm of u_ℓ. So the t_i's will be spaced accordingly, and this Lipschitz norm I had a bound on. So I know what to do on every one of these time steps; but how do I put them together? And the answer is: with a partition of unity in time. So you have a partition of unity in time, and you multiply this by χ_i(t); it's the squares χ_i² that form the partition, because it's the tensor product w ⊗ w that I want to cancel with.
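The lifespan claim can be made quantitative with a Grönwall-type estimate; this is a sketch with indicative constants, not the lecture's precise bookkeeping:

```latex
\partial_t \Phi_i + u_\ell\cdot\nabla\Phi_i = 0,
\qquad \Phi_i(x,t_i) = x,
\\[6pt]
\|\nabla\Phi_i(\cdot,t) - \mathrm{Id}\|_{C^0}
\;\le\; e^{\,|t-t_i|\,\|\nabla u_\ell\|_{C^0}} - 1
\;\lesssim\; 1
\quad\text{for}\quad
|t-t_i| \;\lesssim\; \|\nabla u_\ell\|_{C^0}^{-1}.
```

So the deformation stays under control precisely on time intervals of length one over the Lipschitz norm, which is why the cutoffs $\chi_i$ are supported on intervals of that length.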
Okay, so now, once I've told you this, you have to repeat everything I've said, which we are not going to do, with the plane Beltrami wave replaced by this transported one. And the properties of e^{i λ_{q+1} k · Φ_i(x)} are similar to those of the plane wave, as long as ∇Φ_i doesn't depart much from the identity; if this is true, these objects will behave very similarly. So instead of inverting the elliptic operator on an exponential, you use the non-stationary phase lemma, and everything works. I think I've run over time, and I'm sweeping under the rug a really important issue, this transport term, but essentially this is what you do. There's really nothing else you could have done, because if you don't make your highest-frequency object follow the transport equation, the transport term will just completely destroy everything. And now you believe me that if you follow the transport equation for a short enough time, and by the way this is why you had to propagate Lipschitz norms, it's the only reason you had to propagate the Lipschitz norm from the beginning, everything I've said will work. Now, it is very difficult to go from what I've shown you to one-third regularity; Tristan will discuss this matter. I will stop talking about Euler, and tomorrow I will move on to Navier–Stokes; you'll see a lot of similarities. I'll stop for now. Thank you very much.