Thank you very much for having me over; it's a real pleasure. I am supposed to give three lectures, and Tristan will give three more in a couple of weeks, so having so much time we've decided to spread the topic quite a bit. Today I will mostly talk about motivation, and then maybe a little bit at the end about convex integration type methods. The motivation comes from hydrodynamic turbulence, so I'll review a little of the classical Kolmogorov picture and the Onsager picture, and then I will get to some analysis questions. So we start with 3D incompressible Navier-Stokes. We have an incompressible fluid, and it's homogeneous, so we take the density to be a constant; if you take it to be a constant it stays a constant, so we might as well not write it. Then conservation of momentum reads as follows. The superscript ν reflects the fact that there will be a kinematic viscosity ν. On the left is the acceleration as experienced by a particle moving with the fluid, and conservation of momentum says that this should equal the forces per unit mass, of which there are three types. There's a force which comes in the form of a gradient, and it maintains the fluid incompressible.
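In symbols, the system just described reads (density normalized to one; setting ν = 0 gives the Euler equations):

```latex
\partial_t u^{\nu} + (u^{\nu}\cdot\nabla)\,u^{\nu} + \nabla p^{\nu}
  = \nu\,\Delta u^{\nu} + f^{\nu},
\qquad
\nabla\cdot u^{\nu} = 0,
\qquad x\in\mathbb{T}^3 .
```

The three forces per unit mass on the right of the momentum balance are the pressure gradient (moved here to the left), the viscous friction ν Δu^ν, and the external force f^ν.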
There's a force due to friction between molecules, and in this model the friction between adjacent molecules is modeled by the Laplacian: this is a Newtonian fluid, with linear dependence of the stress tensor on the deformation matrix. And you could have other forces acting on the fluid, either because you are externally forcing it or because it's coupled to a different equation. Conservation of mass simply says that the divergence of u^ν is zero; again, I'm not writing the density anywhere. This parameter ν is the kinematic viscosity, and when ν equals zero the same equations are the 3D Euler equations. I've written 3D, although in what is written on the board the dimension doesn't matter; for this talk I will mostly be concerned with three dimensions, and the spatial variable will live on a three-dimensional torus with periodic boundary conditions. This is not that important for anything I'm talking about, but I think it's simplest to fix a setting and stick to it. Since we're working on the torus we can just remove the mean of the flow: if the initial data and the force have zero mean, then the solution has zero mean, so we'll work with that. About the pressure I will not say much, and the reason is that in this setting it is always a uniquely determined object. You do have to normalize it, because only its gradient appears in the equation, so the pressure is defined up to a function of time; take it to have zero mean on the torus as well. Now take the divergence of the momentum equation: if the forcing is divergence free the forcing term drops, but even if it's not, you always have that minus the Laplacian of the pressure equals the divergence of u^ν · ∇u^ν minus the divergence of the force; and if you normalize
the force to have zero mean, this uniquely defines the pressure, and for whatever I'm going to talk about this will always be the pressure. In particular, notice that using incompressibility you can rewrite the nonlinear term as the divergence of the divergence of the tensor u^ν ⊗ u^ν; so you have minus a Laplacian equal to a divergence of a divergence, and hence the pressure obeys, morally speaking, the same estimates as |u|². So much for basic things about the equation. What am I forgetting to say? Something about units. For the purpose of this discussion let us fix some units. Fix U, a typical velocity unit; what does that mean? Maybe it's the mean L² norm of your data: the angle brackets will always mean averaging, dividing by the measure of the torus, so ⟨|u|²⟩^{1/2} has the units of a velocity, and that defines a unit. And L will be a typical length; here it could be the size of the box, and my box is [0, 2π]³, so L will be 2π. Then look at the equation and notice that every single term has the same units: a time derivative divides by a time, which is L/U, so every term in the momentum equation has units of U²/L. The kinematic viscosity also has units, namely L times U, and this will matter. In particular it is common to define the Reynolds number as the quotient UL/ν; this is a dimensionless number, and what we will mostly be interested in is what happens when the Reynolds number goes to infinity, which is the same as sending the viscosity to 0 while keeping U and L fixed. So much for units. How about a priori estimates? There is only one known coercive a priori estimate for the 3D Navier-Stokes
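As a sanity check on this unit bookkeeping, here is a minimal numerical sketch; the particular values of U, L, and ν below are hypothetical, chosen only for illustration:

```python
import math

def reynolds(U, L, nu):
    """Dimensionless Reynolds number Re = U L / nu.

    U has units of velocity, L of length, and nu (the kinematic
    viscosity) of velocity * length, so the quotient carries no units.
    """
    return U * L / nu

U = 1.0           # typical velocity, e.g. <|u|^2>^(1/2)
L = 2 * math.pi   # size of the box [0, 2*pi]^3
nu = 1e-3         # hypothetical kinematic viscosity

Re = reynolds(U, L, nu)
# Sending nu -> 0 with U, L fixed sends Re -> infinity:
assert abs(reynolds(U, L, nu / 10) - 10 * Re) < 1e-6
```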
equation. You see it by taking the dot product of the Navier-Stokes equation with u^ν itself. You get d/dt of |u^ν|²/2; for the nonlinear term, when you take the dot product you can again take the divergence outside, and you see a term div(u^ν |u^ν|²/2); for the pressure, the gradient of the pressure dotted with u^ν is the same as div(u^ν p), so together you see the Bernoulli pressure; with the Laplacian, when you take the dot product you get a term in divergence form plus the term −ν|∇u^ν|², which is not in divergence form; and then there's the force. If you have a smooth solution this is an identity, which you can now integrate; because you're on the torus, with zero mean, the divergence terms drop out, and you find that d/dt of the kinetic energy is given by minus the dissipation rate plus the work of the force. This is the only known coercive a priori estimate for 3D Navier-Stokes: it controls the L² norm in space uniformly in time, and it controls the gradient of u in L² in space and time. What bound would you need on the force? If the force is the gradient of an L² function, you can move the gradient onto the velocity and use Cauchy-Schwarz; so if the force is in L² in time, Ḣ^{−1} in space (remember everybody has zero mean, so negative-order Sobolev spaces on the torus everybody knows how to define), then u^ν belongs to L^∞ in time with values in L², and moreover you have an L² in time bound for the gradient. So this is the a priori estimate, and it holds for any smooth solution; in particular, if you do an approximation of Navier-Stokes, like a Galerkin truncation or any kind of approximation scheme, it is going to hold uniformly in your approximation. And this is what Leray did: Leray uses this a priori estimate
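Integrated over the torus, where all the divergence-form terms drop out, the balance just derived reads:

```latex
\frac{d}{dt}\int_{\mathbb{T}^3}\frac{|u^{\nu}|^2}{2}\,dx
  = -\,\nu\int_{\mathbb{T}^3}|\nabla u^{\nu}|^2\,dx
    + \int_{\mathbb{T}^3} f^{\nu}\cdot u^{\nu}\,dx .
```

One standard quantitative form of the resulting bound (Cauchy-Schwarz plus Young's inequality on the force term, assuming f^ν ∈ L²_t Ḣ^{−1}) is:

```latex
\|u^{\nu}(t)\|_{L^2}^2 + \nu\int_0^t \|\nabla u^{\nu}(s)\|_{L^2}^2\,ds
  \;\le\; \|u_0\|_{L^2}^2
  + \frac{1}{\nu}\int_0^t \|f^{\nu}(s)\|_{\dot H^{-1}}^2\,ds .
```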
integrated in time. If you have an approximation scheme this bound is uniform in the scheme, but when you pass to the limit the equality, because of lower semicontinuity, becomes an inequality: you get what is called the energy inequality, as opposed to the energy equality. So: for any initial data in L² with zero mean, and force in L² in time, H^{−1} in space, there exists u^ν in the energy class, a weak solution of 3D Navier-Stokes, which obeys the energy inequality: the energy at time t, plus ν times the energy dissipation rate integrated from 0 to t, is less than or equal to the energy at time 0 plus whatever the force inputs. Of course Leray's solutions are a bit better than this. The L^∞ in time is actually weak continuity in time: when you test against a fixed smooth function independent of time, the inner product of u^ν with that test function is continuous in time. And where you see the two times t and 0, Leray's solutions in fact satisfy the inequality with 0 and t replaced by τ and t, for almost every τ ≥ 0 and for all t > τ, which is stronger information. I haven't yet defined the concept of a weak solution, so let me write that down. Definition: u is a weak solution of Navier-Stokes, or Euler, if it has at least some regularity in time, say weak continuity, so that we can make sense of initial data; u belongs to L² in space and time; for almost every time u is weakly incompressible, meaning the divergence of u is 0 in the sense of distributions; and the equation holds in the sense of distributions: the integral of u against any test function at time t, minus the integral of the initial data against the test function at time 0, is equal to the integral from 0 to t of the integral over space of u dotted with ∂_t of
the test function, plus the nonlinear term tested against the gradient of the test function. If you have Navier-Stokes I need to write the Laplacian term; if you have Euler you don't have it. That's the weak form of the equation with initial data u₀, and it should hold for all φ which are C₀^∞ in time with values in C^∞ of the torus and which are incompressible: I have not written the pressure term, so I need to take incompressible test functions, otherwise I would have to write the pressure term. So this is the notion of a weak or distributional solution, and notice it's the same definition for Euler and Navier-Stokes; what you really need to give it meaning is just L² integrability in both space and time, and this is in some sense the weakest notion of solution you can have. Leray's solution is a stronger notion than this: first because it gives you the extra regularity, but much more importantly because of the energy inequality. Why am I saying this? You can ask yourself: for a weak solution in the energy class, can you prove the energy inequality? The answer is that, to date, that's open. Open problem: do weak solutions in the energy class automatically obey the energy inequality? It's really not known. There are conditional results, of the form: yes, if you tell me more, such as u being in L⁴; this is an old result of Shinbrot from 1974. I would like to stress that being L⁴ is not the same as being smooth: if I give you a Leray solution and tell you in addition that it's in L⁴, you cannot prove it's smooth; it's below scaling, a supercritical assumption, which would imply the energy inequality but not regularity, and that is a non-trivial fact. Of course the big open problem is the uniqueness of Leray solutions, for f in L² in time, H^{−1} in space; a smaller open problem was the uniqueness of just weak solutions, and as I will mention, with Tristan we proved that this type of
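Written out, the weak formulation just described takes the standard form: for every divergence-free test function φ ∈ C₀^∞ in time with values in C^∞(T³), and every t,

```latex
\int_{\mathbb{T}^3} u(x,t)\cdot\varphi(x,t)\,dx
 - \int_{\mathbb{T}^3} u_0(x)\cdot\varphi(x,0)\,dx
 = \int_0^t\!\!\int_{\mathbb{T}^3}
   \Big( u\cdot\partial_s\varphi + (u\otimes u):\nabla\varphi
         + \nu\, u\cdot\Delta\varphi + f\cdot\varphi \Big)\,dx\,ds ,
```

with the term ν u·Δφ dropped for Euler; note every term here makes sense for u merely in L² in space and time.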
weak solutions are actually not unique, although our theorem does not apply to the Leray class; I will talk about that later. [Question about a sign.] Thank you, let's see: I've put a minus, and there's either a plus or a minus depending on conventions; since we've integrated by parts once, it's actually a plus. We'll talk more about open problems and about Leray solutions, but let's go back to the energy inequality; I will return to the rest later, maybe on Wednesday. The point is that for a Leray solution you cannot write the energy equality: I was not even able to justify it in the sense of distributions, and this is of course related to what you were alluding to concerning the nonlinear term. So what can you actually justify? I'll present this not quite in chronological order, but I really like this paper of Duchon and Robert from 2000, who wrote down exactly what you can justify, and it is the same identity plus one more term. Let me write it down; it's a bit repetitive, but: the same left-hand side, and on the right-hand side you have the work of the force, the classical energy dissipation rate, and one more term, a distribution D(u), which I will now describe. What Duchon and Robert did is revisit an old identity in turbulence called the Kármán-Howarth-Monin relation, which is again about writing something in divergence form; they established rigorously that this D(u) can be written as the limit, as ℓ goes to 0, of one fourth of (this is a function of x and t; well, not a function, a distribution) the average over the whole space of the following objects, and I'll explain
what they are: the gradient of a mollifier evaluated at z, dotted with a velocity increment, times the square of that velocity increment. Let me define things. The velocity increment, throughout my talk, for z in R³, will be δu(x, t; z) = u(x + z, t) − u(x, t); this is the velocity increment based at the point (x, t) over the spatial displacement z. And this φ_ℓ is just an approximation of the identity, φ_ℓ(z) = ℓ^{−3} φ(ℓ^{−1} z), where φ is some bump function of unit mass, positive and even (by even I mean radial); you can even take it of compact support if you want. So this is the identity established by Duchon and Robert: for any weak solution in the energy class this is actually an equality in the sense of distributions. Everything here is a distribution, but what we were missing before is this extra term, which is the dissipation possibly coming through the nonlinear term. And notice that this limit has a very well-defined meaning. Remark: by interpolation, if u is in the energy class L^∞L² ∩ L²H¹, then since in three dimensions H¹ embeds in L⁶, you just interpolate and u belongs to L^{10/3} in space and time; and ten thirds is more than three, so in particular the cubic expression is a very well-defined L¹ object, in fact better than L¹, and it has a meaningful limit in the sense of distributions. In fact, throughout Duchon and Robert's proof it is used that u is in L^{10/3}, which means that any trilinear combination has a well-defined limit in the sense of distributions. So you can ask yourselves: when is this distribution the zero distribution? It's not hard to imagine. Without the gradient (it's a gradient in z, of course) this is an approximation of the identity; shifts are continuous on L³, so this would go to zero for any L³ function. Without the gradient, that is; the
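To see what the regularized quantity looks like before the ℓ → 0 limit, here is a toy sketch on a 1D periodic grid; the one-dimensional setting, the Gaussian mollifier, and the test field are my illustrative choices, not part of the lecture's 3D formula:

```python
import numpy as np

def duchon_robert_flux(u, ell, dx):
    """Toy 1D periodic analogue of the regularized Duchon-Robert term:
        D_ell(x) = (1/4) * int phi_ell'(z) * du(x;z) * |du(x;z)|^2 dz,
    with du(x;z) = u(x+z) - u(x) and phi_ell a unit-mass Gaussian of
    width ell. For smooth u this tends to 0 (like ell^2) as ell -> 0.
    """
    n = u.size
    z = (np.arange(n) - n // 2) * dx          # displacements on the grid
    phi = np.exp(-(z / ell) ** 2 / 2) / (ell * np.sqrt(2 * np.pi))
    dphi = -z / ell**2 * phi                  # derivative of the mollifier
    D = np.zeros(n)
    for k in range(n):
        du = np.roll(u, -(k - n // 2)) - u    # u(x + z_k) - u(x), periodic
        D += 0.25 * dphi[k] * du * np.abs(du) ** 2 * dx
    return D

# Smooth field: the regularized dissipation is uniformly small.
n = 128
dx = 2 * np.pi / n
u = np.sin(np.arange(n) * dx)
D = duchon_robert_flux(u, ell=0.2, dx=dx)
```

For rough fields (a sawtooth, say, as a caricature of a Burgers shock) this quantity does not vanish as ℓ → 0, which is the 1D cartoon of anomalous dissipation.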
gradient, however, will cost you a 1/ℓ. If the approximation of the identity has compact support, then |z| is comparable to ℓ, so you can in principle divide each of the two increments here by |z|^{1/3} and put the corresponding power of |z| with the kernel; the resulting kernel then has no scaling defect and is again a good approximation-of-the-identity-type kernel. So if the increments divided by |z|^{1/3} are uniformly in L³, then again you can hope to have zero limit, and this is of course related to the whole discussion of Onsager's conjecture and so on. Two more remarks. First, D(u^ν) is positive for any Leray solution: for a Leray solution this dissipation is a signed distribution, and this has to do again with the energy inequality. The last point I want to make: if you have a sequence u^ν of solutions to 3D Navier-Stokes and you knew, for some reason, that they converge as ν goes to zero (in what sense? strongly in L³) to a weak solution u of Euler, then you can prove that these dissipation measures converge, and as a consequence the measure D(u) for the Euler solution is also signed. So if you can construct a weak solution of 3D Euler as a limit of Leray solutions, you should get weak solutions which are called dissipative: the viscous term does not exist for Euler, and this term D(u) has a sign (sorry, that's a minus; it's very important, because otherwise saying that this measure is positive would be nonsensical). So you get what is called a dissipative solution of Euler, because in the absence of force it only dissipates energy. In view of this result of Duchon and Robert, let's write the correct energy balance: d/dt of the energy is again the work of the force, minus the usual dissipation, minus the mean of this measure; the total dissipation of energy comes from the
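The heuristic of absorbing a factor |z|^{1/3} into each increment can be made rigorous; in the spirit of Duchon-Robert (and of the Constantin-E-Titi energy conservation argument), a sufficient condition for the anomalous term to vanish is, roughly:

```latex
\int_{\mathbb{T}^3} |\delta u(x,t;z)|^{3}\,dx \;\le\; C(t)\,|z|\,\sigma(|z|),
\qquad \sigma(\ell)\to 0 \text{ as } \ell\to 0,\quad C\in L^1_t
\;\;\Longrightarrow\;\; D(u)=0 ,
```

which in particular holds if u lies in L³ in time with values in a Besov space of regularity strictly above 1/3 on the L³ scale.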
combination of these two things. In turbulence (I will discuss what these brackets mean) the average of this quantity is typically denoted ε_ν. What is this average? For a theoretical physicist trained in statistical mechanics it is an ensemble average: you do this for all initial data, you put a measure on the set of data, and then you average with respect to that measure. In an experiment, what you can do is of course just run the experiment for a very long time, so then it would be a long-time average, 1/T times the integral from 0 to T. And in physics and statistical mechanics you sometimes just make an ergodic assumption, and then they're actually the same: the average against the ergodic invariant measure is the same as the long-time average. For the rest of the physics part of the talk I will use these brackets to denote one of these ways of averaging; we can just think of it as a long-time average. The point I'm trying to make is that the kinetic energy has the input of the force and then an output, which on average is this ε_ν, and it is composed of two things; that's what I want to emphasize. This ε_ν is the mean energy dissipation rate per unit mass, and the zeroth law of turbulence (an expression that, I have learned, was coined by my colleague Sreenivasan) states that when you send the Reynolds number to infinity, which is the same as sending ν to zero while keeping U and L fixed, this ε_ν does not go to zero: the limit of ε_ν is strictly positive. This is more commonly called anomalous dissipation of energy. Of course, experimentally you can measure things like this. In particular, consider an experiment in which a laminar flow comes in and hits, say, a ball, goes around it, and becomes turbulent. You can measure the
drag on this ball, and the drag is proportional, in terms of U and L, to this energy dissipation rate per unit mass. You will sometimes see a picture in which the drag coefficient is plotted as a function of the Reynolds number. How do you alter the Reynolds number? You increase the speed of the incoming flow, because you don't want to change ν: you have one fluid in your tank. So on one axis you have the drag coefficient and on the other the Reynolds number, and in a typical picture you will see a curve which at first wants to behave like one over the Reynolds number, that is, proportionally to ν, which makes sense: if the flow is smooth this quantity should go to zero like ν. But then around Reynolds number 10⁴ you have transition to turbulence, boundary layers appearing, what is called the drag crisis, and by 10⁵ and higher it stabilizes and really seems to converge to a constant. This is one of the ways you can actually see in an experiment that the zeroth law of turbulence seems to hold, with tremendously good error bars. [Question: so somehow there is a transfer of dissipation from the first term to the other one?] Absolutely, there's an interplay between the two terms. And I should add that for me, as a person working in fluids, it would be a dream to prove this. What do I mean by proving this? You're not going to prove it at the first shot with ensemble averages and for almost every datum; just to give an example: a single datum and a sequence of forces for which you can compute the long-time averages and actually check that this limit is not zero. This would maybe be a challenge for a different generation. OK, let's talk about math; well, not really, I'll first talk a bit more about physics. Since I mentioned this, I want to talk a bit about Kolmogorov, who in the early 40s made predictions about homogeneous isotropic turbulence, and I just
want to mention one thing about it before I get to Onsager, because in the third lecture we will return to some ideas of Kolmogorov, and I think it's important to know the motivation for why we even look at this. Kolmogorov makes some assumptions. The first assumption is of course the one above: that this number called ε is positive. Then he makes further assumptions, that turbulence is homogeneous, isotropic, and somehow self-similar in the inertial range of scales. What do I mean by this? Kolmogorov introduces a length scale ℓ_D, the dissipation length scale, at which the fluid starts to convert energy into heat. Think about units: what is the unit of ε, by the way? We have the unit of ν on the board, so we can compute the unit of everything, and ε turns out to have units of U³/L. Given ν, with units UL, and ε, with units U³/L, essentially the only way to produce a length is to define the dissipative scale as ℓ_D = (ν³/ε)^{1/4}: you see, ν³ has units of U³L³, the U³ cancels against ε's, and you're left with L⁴; so for Kolmogorov the one-fourth power was the only one that could give a unit of length. The inertial range is defined as all length scales which are much larger than the dissipative length and much smaller than what is called the inertial length ℓ_I. What is the inertial length? Imagine the force f^ν that I kept writing is band-limited: its Fourier transform is compactly supported. Call ℓ_I^{−1} the largest active frequency, for all ν: you want the force to act not at super high frequencies but only at some finite set of frequencies; you pick the largest such frequency and call the inertial length its inverse. So, Kolmogorov assumes homogeneity. What does that mean? If you look at the
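The dimensional computation for the dissipative scale can be checked mechanically; the numbers below are hypothetical:

```python
def dissipation_scale(nu, eps):
    """Kolmogorov dissipative length l_D = (nu^3 / eps)^(1/4).

    nu has units U*L and eps has units U^3/L, so nu^3/eps has units
    (U^3 L^3) / (U^3 / L) = L^4, and the 1/4 power yields a length.
    """
    return (nu ** 3 / eps) ** 0.25

# Rescaling check: under U -> a*U, L -> b*L we have nu -> a*b*nu and
# eps -> (a**3 / b)*eps, and l_D must rescale like a length, i.e. by b:
a, b = 2.0, 5.0
nu, eps = 1e-3, 0.7    # hypothetical values
assert abs(dissipation_scale(a * b * nu, a**3 / b * eps)
           - b * dissipation_scale(nu, eps)) < 1e-12
```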
increment δu(x, t; z), and instead of z we write ℓ ẑ, where ẑ is an element of the unit sphere and ℓ is a positive number, then homogeneity means that the law, in the probabilistic sense, of this quantity is the same for all x in the torus; isotropy says that the law is the same for all ẑ on the sphere; and self-similarity states, roughly speaking, that if you put a λ in front of ℓ, then in law the increment at scale λℓ equals λ to some power times the increment at scale ℓ, and this should hold when both ℓ and λℓ are in the inertial range. Under these assumptions (it's a bit informal at this point, but I will mention a theorem in a second) Kolmogorov makes predictions about structure functions; this is the main goal. There are two types of structure functions. First, S_p(ℓ) = ⟨|δu|^p⟩, the p-th moment of |δu|. What do I mean by the bracket here? It's an average over space, it's the statistical (or long-time) average, and it's an average over all angles, over the sphere S²; everything is hidden in this bracket, so what comes out is just a function of ℓ. This is the p-th absolute structure function, and notice it has units of U^p. In fact, the most beautiful result in turbulence has to do with what are called longitudinal structure functions, in which, instead of taking the absolute value (very similar to the Duchon-Robert formula for D(u)), you take ẑ dotted with the increment, that is, the component of the increment in the direction of z, and raise that to the p-th power. The most beautiful result in Kolmogorov's theory, sometimes called an exact result in turbulence, says that the third-order longitudinal structure function, as ℓ goes to 0, behaves like −(4/5) ε ℓ. Under this wiggle
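To make the brackets concrete, here is a minimal sketch of structure functions for a 1D periodic sample field; the space average stands in for the full space/time/angle average, and the field itself is a made-up test signal:

```python
import numpy as np

def absolute_sf(u, p, shift):
    """Absolute structure function S_p: space average of |u(x+l) - u(x)|^p,
    with l = shift grid points (1D periodic stand-in for the 3D average)."""
    du = np.roll(u, -shift) - u
    return np.mean(np.abs(du) ** p)

def signed_sf3(u, shift):
    """Third-order signed structure function <(du)^3>, a 1D stand-in for
    the longitudinal S_3 appearing in the four-fifths law."""
    du = np.roll(u, -shift) - u
    return np.mean(du ** 3)

n = 256
x = 2 * np.pi * np.arange(n) / n
u = np.sin(x) + 0.5 * np.sin(3 * x + 1.0)   # hypothetical velocity sample

assert absolute_sf(u, 2, 0) == 0.0          # zero separation: no increment
assert absolute_sf(u, 2, 5) > 0.0           # genuine increments at l > 0
```

Note that the signed version can be negative, which is exactly the point of the four-fifths law: the sign records the direction of the energy cascade.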
sign, what is meant is the following: this holds for all ℓ in the inertial range; then you send the viscosity to 0, so in some sense the dissipative scale goes to 0; and then you send the inertial scale to 0 as well, so this holds at infinite Reynolds number as ℓ goes to 0. And there is a theorem, believe it or not, about this: a beautiful theorem with four authors, Bedrossian, Coti Zelati, Punshon-Smith, and Weber. What are they doing? They're looking at forced 3D Navier-Stokes, where you take this force f^ν to act on finitely many Fourier modes but to be white in time, i.e. Brownian motions jiggling the first ten modes of Navier-Stokes. In this setting, if your forcing is spatially homogeneous, so its law is translation invariant, then by old results (which can be traced back, I guess, to Bensoussan and Temam, or even earlier) you can construct a statistically stationary martingale solution of stochastic Navier-Stokes: a weak solution in the sense of both probability and PDE. Now, for such a statistically stationary martingale solution you have to make an assumption, which is like anomalous dissipation of energy, and it's an if and only if, by the way: the assumption is that the limit as ν goes to zero of ν times the expectation of the L² norm squared is zero; notice, the L² norm, no gradient. This u^ν is a statistically stationary martingale solution, so the law is independent of time and I don't need to write time. Then you can compute S₃ of ℓ plus four fifths εℓ and show that it goes to zero, with a limiting process: some limit as ℓ/ℓ_I goes to zero of some lim sup as ν goes to zero. Sorry, this is very important: this is only true for the third-order structure function. So it's a math theorem; it's still a conditional theorem, but it's pretty remarkable, because the condition is not that ν times the expectation of the
H¹ norm squared goes to zero, but that ν times the expectation of the L² norm squared goes to zero. In any physically meaningful experiment the expectation of the L² norm squared is actually almost constant, so ν times it not only goes to zero, it goes to zero like ν. So this is pretty good evidence for Kolmogorov's four-fifths law, and working in the stochastic setting allows you to make sense of these averages and all these beautiful things. OK, now I need to start erasing. Of course, Kolmogorov made predictions about all structure functions, and his argument was that all structure functions, even the absolute ones, should scale as the units dictate: S_p should have units of U^p, and εℓ has units of U³, so S_p(ℓ) should be (εℓ)^{p/3}. What's really remarkable is that from this rather benign scaling argument you get an asymptotic law which turns out to be pretty damn accurate for p close to 3: if you look at experimentally observed turbulence, for p not very far from 3 this is really accurate. Of course it's not very accurate for p away from 3, but it's pretty remarkable that Kolmogorov is mostly right for p close to 3. And it's not always right. Look at ζ_p, the structure function exponent: you take S_p(ℓ), take its log, divide by the log of εℓ, and take the limit as ℓ goes to zero; this defines the number ζ_p, which is called the structure function exponent. One beautiful picture, compiled by Uriel Frisch in his book from all sorts of experiments at that time, plots ζ_p as a function of p. Kolmogorov of course has the line p/3. When p is 3, all experiments that I know of are on the money, which is to say that the four-fifths law is pretty damn good; but for higher-order moments this starts to go a bit down, and it's actually not even universal: there are
all sorts of experiments which predict structure function exponents below the line for p larger than 3. What's even more interesting, since we are in a PDE setting and I think we all like H^s: when p is 2 this is of course related to a Sobolev space, and what's really remarkable is that almost all experiments show a second-order structure function exponent above 2/3 (here on the plot would be 2/3). This has to do with intermittency, and I will talk about intermittency when I mention the convex integration constructions for Navier-Stokes; but for the moment, since we were talking a lot about open problems: it is an open problem today to construct a weak solution of Euler whose ζ₂ is more than two thirds. What does that mean? If I write the Besov space B^s_{p,∞}, its norm is roughly equivalent to the L^p norm plus the sup over |z| > 0 of |z|^{−s} times the L^p norm in x of the velocity increment δu(·; z). Now look at the absolute structure function: S_p(ℓ) divided by ℓ^{ζ_p} staying bounded as ℓ goes to zero should remind you of this Besov space; since the structure function is a p-th power, for the Besov norm I should have ℓ^s with s equal to ζ_p divided by p, because otherwise it doesn't work. When p is 2, and this is where I was going, this is B^{ζ₂/2}_{2,∞}, and ζ₂ more than two thirds means that this regularity exponent is strictly larger than one third. So one of the most beautiful open problems in the field is to show that there exist weak solutions of Euler which go above the Kolmogorov scaling on, let's say, the H^s scale (B^s_{2,∞} versus H^s, let's not discuss the difference here). And how far can you actually push this number? There's an old theorem of Sulem and Frisch which says that you cannot push
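The correspondence sketched on the board, written out (heuristically, with the sup taken uniformly over separations):

```latex
\|u\|_{B^{s}_{p,\infty}} \;\approx\; \|u\|_{L^p}
  + \sup_{z\neq 0} |z|^{-s}\,\| u(\cdot+z)-u(\cdot) \|_{L^p},
\qquad
S_p(\ell)\lesssim \ell^{\,\zeta_p}
\;\Longleftrightarrow\;
u \in B^{\zeta_p/p}_{p,\infty} ,
```

so for p = 2 the Kolmogorov value ζ₂ = 2/3 corresponds to B^{1/3}_{2,∞} (the H^{1/3} scale), and ζ₂ > 2/3 means a regularity exponent strictly above one third.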
it past five sixths without something strange happening. But what is the most intermittent weak solution of Euler that you can possibly construct? That is how I would phrase the question in physics language, and it is one of the most beautiful problems left. I think we should take a break, but let me just say a little about what I'm going to do next. We've discussed a little this Kolmogorov picture; I didn't say much about intermittency, because we would spend too much time. Next I want to tell you about the Onsager part of this picture, and then I will start to mention some math theorems. So let's take a short break. [Question: I'm a little puzzled by the signs of all these quantities, because S parallel is the average of a cube...] Thank you. It is kind of weird, right? Why would the cube of something have a sign? But there's no contradiction: ε is positive, and on the board I wrote absolute structure functions; still, you're absolutely correct, and this is one of the remarkable things. It says that on average, at small scales, this cubic term actually has a sign; what it really says is that energy transfers from large scales to small scales, and that on average it is a one-directional cascade. Before I switch to Onsager, I cannot abstain from mentioning something about intermittency. It's very hard to prove theorems about intermittency, because even defining it is a strange thing; intermittency, in a broad sense, really means deviation from Kolmogorov. But there's a very nice paper of Cheskidov and Shvydkoy in which they attempt to define intermittency in terms of the saturation level of Bernstein inequalities. You can look at the Littlewood-Paley decomposition of any flow, and what people in turbulence sometimes measure is what is called a skewness factor: they take a structure
function different from three they raise it to the correct power they divide by the third order structure function to the correct power and they check how this scales and if there is deviation from Kolmogorov this will capture it so these guys they define some volumes which they call active volumes so imagine you have the torus and then you do an atomic decomposition or some wavelet decomposition and then you look at the boxes in which there's a lot of energy transfer to small scale the union of those boxes define a volume and you can try to estimate the volume of that volume and these guys define it like this so it's the average so by average I mean spatial average and long time average of the little Paley shell squared power 3 divided by the little Paley shell cubed right so this is like a best of a L2 based best of space relate it's related to this is related to an L3 based best of space and you can define this for all Q and again this is meant to be if you look at scales of size 2 to the minus Q you divide the torus into boxes of size 2 to the minus Q this is somehow the volume at which stuff happens and then you can define an intermittency dimension to be 3 minus limit Q goes to infinity so this is related to 2 to the Q so let's take log base 2 of L to the minus 3 V Q okay so this turns out is related to the old work of Frisch, Suleyman and Elkin about the so-called beta model for intermittency in which they propose as a way of showing deviation from Kolmogorov theory what is called a bifractile picture that there's a certain proportion of the energy which gets transferred to smaller scales at all scales and this dimension is a number which you can define the reason I bring it up because this is just a number that you can define for any flow and it really measures saturation of Bernstein's inequalities if you if you imagine that you it's sorry delta Q of U so in this annulus you have a volume fraction of active frequencies maybe this fraction is restriction of 
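Schematically — and this is only my reconstruction of the scaling just described, with the precise norms and constants as in the Cheskidov–Shvydkoy paper:

```latex
% If the active volume at frequency 2^Q scales like a power of 2^{-Q},
V_Q \;\sim\; 2^{-Q\,(3-d)} \quad (Q \to \infty)
\qquad\Longleftrightarrow\qquad
d \;=\; 3 \;+\; \lim_{Q\to\infty} \frac{\log_2 V_Q}{Q}.
% Space-filling (homogeneous Kolmogorov) turbulence corresponds to d = 3;
% active frequencies concentrating near a plane would give d = 2.
```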
Cheskidov and Shvydkoy prove a very beautiful theorem: any Leray solution with $d > 3/2$ is smooth. So if you take a Leray solution, compute this number, and somehow get more than $3/2$, that is the same as regularity. It turns out that this condition implies the Prodi–Serrin criteria, and it implies Beale–Kato–Majda for 3D Euler — somehow it implies them all; the value $3/2$ is related to scaling. What do people in turbulence actually observe? They measure these things, and there is a long history of measurements — my colleague Sreenivasan has told me about that long history — but there is one big direct numerical simulation at very high Reynolds number, done in Japan in 2003 by the group of Kaneda, and $d$ comes out as $2.7$ or so. So when people observe turbulent flows, they in fact observe smooth solutions: this number is computed on a smooth object, and in particular the Duchon–Robert measure is zero. What goes wrong in the limit is that you do not have bounds uniform in viscosity — the bounds degenerate as $\nu \to 0$ — and that is really what is happening. I only brought this up because, in my view, this discussion of turbulence is not related to the Clay problem, which is the blow-up of 3D Navier–Stokes; to me these are completely distinct problems. It is, however, related to weak solutions of 3D Euler, not of Navier–Stokes, and that is where I want to go now: the Onsager picture of ideal turbulence. Onsager did look at Navier–Stokes, but he wanted to see whether it is possible for an Euler flow to dissipate energy by itself, without any viscosity. Now that we have introduced the dissipation measure, we know that the kinetic energy density obeys the local energy balance with that defect measure on the right-hand side for an Euler flow, and what Onsager is really talking about is the fact that this measure is not the zero measure — that it genuinely carries mass. Of course Onsager did not write this measure, and did not use this language. What he actually did was take the Fourier series, truncate it at a finite scale, do a computation, and introduce an object called the flux of energy through the frequency shell at a certain frequency parameter, call it $\kappa$: you look in Fourier space at the boundary of the ball of radius $\kappa$, and you try to measure how much energy flows out through it. Let us do this computation really in the spirit of Onsager. What Onsager did is a truncation in Fourier space, so let us write $P_{\le\kappa}$ to mean the Fourier truncation — it would be even better to use Littlewood–Paley, so if you like, think of $\kappa = 2^j$ for some $j$ — and in particular $P_{\le\kappa}$ commutes with time derivatives and with spatial derivatives. First, the energy at time $t$ is, by the dominated convergence theorem, the limit as $\kappa \to \infty$ of the truncated energy $\tfrac12\|P_{\le\kappa}u(t)\|_{L^2}^2$ — DCT says that if $u \in L^2$ this is true. So if you want to look at energy conservation, consider the truncated energy at time $t$ minus the truncated energy at time zero. By the fundamental theorem of calculus — which is justified because $P_{\le\kappa}u$ is smooth — this equals $\int_0^t \frac{d}{ds}\,\tfrac12\|P_{\le\kappa}u(s)\|_{L^2}^2\, ds$. Now I can just use the Euler equation: the time derivative of $P_{\le\kappa}u$ is $-P_{\le\kappa}\,\mathrm{div}(u\otimes u) - P_{\le\kappa}\nabla p + P_{\le\kappa} f$ — the transport term, the pressure gradient, and the force.
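Assembling the steps just described — a schematic transcription, with $P_{\le\kappa}$ the Fourier truncation and $f$ the frequency-localized force:

```latex
\tfrac12\|P_{\le\kappa}u(t)\|_{L^2}^2 - \tfrac12\|P_{\le\kappa}u(0)\|_{L^2}^2
= \int_0^t\!\!\int_{\mathbb{T}^3} P_{\le\kappa}u \cdot
\big( -P_{\le\kappa}\,\mathrm{div}(u\otimes u)
      - P_{\le\kappa}\nabla p + P_{\le\kappa}f \big)\, dx\, ds .
% The pressure term vanishes upon integration by parts (div P_{\le\kappa}u = 0);
% the transport term is the one that produces the flux \Pi_\kappa below.
```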
These projections commute, so we can integrate by parts. The pressure term becomes a divergence, which is zero. The transport term does not become a pure divergence — it becomes something else. The force term we leave as it is: remember that we assumed the force is frequency-localized, so for $\kappa$ larger than some fixed inertial frequency $P_{\le\kappa}f = f$ — this term does not even depend on $\kappa$ — and the product converges strongly to $\int u \cdot f$. So fix $\kappa$ large, and we are left with the transport term: integrating by parts, the divergence becomes a gradient, and we get $\int \nabla P_{\le\kappa}u : P_{\le\kappa}(u\otimes u)$, a matrix contracted against a matrix. Of course, if the Fourier projection landed directly on a single copy of $u$, this would vanish by incompressibility, but it does not: Fourier multipliers in general do not commute with nonlinear operations. What we can do is subtract zero for free. The zero we subtract is $\int \nabla P_{\le\kappa}u : (P_{\le\kappa}u \otimes P_{\le\kappa}u)$: now the contraction involves smooth functions, and incompressibility says this artificially added term is zero. So you can define the flux density $\pi_\kappa(x,t)$, and define its integral $\Pi_\kappa$ to be the flux of energy through the shell at frequency $\kappa$. Sending $\kappa \to \infty$, we obtain a theorem essentially due to Onsager — he did not write things exactly like this, but pretty close: in the absence of force, kinetic energy is conserved if and only if $\lim_{\kappa\to\infty}\Pi_\kappa = 0$. But Onsager did a bit more: he predicted, in some sense, when this limit vanishes. By a simple computation, we can write down a pointwise identity for what $\pi_\kappa(x,t)$ actually equals at every $x$ and $t$. You get one contribution in which both factors keep only the high modes, $P_{>\kappa}u \otimes P_{>\kappa}u$, and a remainder term with an explicit formula: it is $\kappa^3$ times the integral against a kernel $H(\kappa z)$ — basically the inverse Fourier transform of the projection $P_{\le\kappa}$ — of $\delta_z u \otimes \delta_z u$, where $\delta_z u = u(x+z) - u(x)$ is the velocity increment. So the point is that everything is expressed purely in terms of increments. By the way, this identity was written down by Constantin, E and Titi, and it is really just the Bony decomposition of the Littlewood–Paley projection of a product; Onsager had something very similar, only with Fourier series, so it is a bit harder to untangle. Now let us bound these terms. Suppose $u$ has $\alpha$ regularity, meaning that an increment scales, in a certain sense, like $|z|^\alpha$. In what sense? We have a trilinear term, so if I want to put all three copies of $u$ in the same $L^p$, I should use $L^3$: suppose $u$ has $\alpha$ derivatives in $L^3$. Then you divide $\delta_z u \otimes \delta_z u$ by $|z|^{2\alpha}$ and take the $L^{3/2}$ norm — $L^{3/2}$ means you are putting $L^3$ on each factor and $L^1$ on the kernel. The Besov space says exactly that $\||z|^{-\alpha}\delta_z u\|_{L^3}$ is bounded independently of $z$, while the kernel produces a factor $\kappa^{-2\alpha}$, because it is a very nice kernel. That is one bound. The high–high term is even easier: we just put $L^3 \times L^3$, and the fact that you only have high modes lets you translate any regularity into decay of the Littlewood–Paley pieces — the same kind of proof, in fact easier. Lastly, we must bound the remaining factor, the gradient of the low modes of $u$: putting things in $L^{3/2}$ as before, the gradient costs a factor $\kappa$, against which we gain $\kappa^{-\alpha}$ from the $\alpha$ derivatives. In total we have shown that the flux is bounded by $\kappa^{1-\alpha}$ times $\kappa^{-2\alpha}$, so $\kappa^{1-3\alpha}$. This is exactly the computation from the paper of Constantin, E and Titi, and as a consequence the limit is zero if $\alpha > 1/3$. So this is the proof of energy conservation à la Onsager, but in this modern language, as done by Constantin, E and Titi: it really shows that if you have more than a third of a derivative in $L^3$, essentially, then you conserve energy.
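To summarize the three estimates just performed — schematic, with constants suppressed, under the standing assumption $u \in L^3_t B^{\alpha}_{3,\infty}$:

```latex
% Each term in the Constantin–E–Titi decomposition pays one gradient of the
% low modes (a factor \kappa^{1-\alpha}) against two increments (\kappa^{-2\alpha}):
|\Pi_\kappa| \;\lesssim\; \kappa^{1-\alpha}\cdot\kappa^{-2\alpha}
\;=\; \kappa^{\,1-3\alpha}
\;\xrightarrow[\kappa\to\infty]{}\; 0
\qquad \text{whenever } \alpha > \tfrac13 .
```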
Now all sorts of questions arise after this. In fact, Onsager was a very smart man: he did not just say this, he also said that the opposite is true, and some people refer to this as the Onsager conjecture — which, in the way I am going to write it, is now a theorem. The old way people wrote it used not Besov spaces, which are the correct way, but Hölder spaces, and it had two parts. (A) Rigidity: for any weak solution $u$ of 3D Euler which belongs — again, in this old-fashioned way — to $C^\alpha$ with $\alpha > 1/3$, energy is conserved; or, if you have a force, the energy balance holds, and conservation holds if you do not. (B) Flexibility: for any $\alpha < 1/3$ there exists at least one weak solution of 3D Euler in this space which dissipates energy, again in the absence of force. Let me state both parts in the absence of force, so that it is cleaner. What we have done here is the proof of the first part of the conjecture, and we have discussed the fact that these spaces should really be $L^3$ in time with values in $B^{\alpha}_{3,\infty}$ — that is what you really need for energy conservation. Several questions now appear. I have omitted the exponent $1/3$ itself on purpose, because funny things happen at $1/3$; let us look at one cautionary tale. But before that, look at the quantifiers in the flexibility part: for any $\alpha$ there exists a weak solution. Is it true that every weak solution in $C^\alpha$ dissipates energy? Absolutely not: take a shear flow, which is $C^\alpha$ — it is a shear flow, a stationary solution, and it dissipates nothing. So of course there will be outliers in $C^\alpha$ for which energy is actually conserved; it turns out that is a very thin set — a meager set in the sense of Baire category. Second, if you take 1D Burgers, the proof of rigidity works the same way and you can prove the same theorem, but we know that in 1D Burgers you have energy dissipation through shocks. In which space does a shock live? The answer is that a shock lives in $B^{1/3}_{3,\infty}$ — with emphasis on the $\infty$: every Littlewood–Paley piece, even as you send $q \to \infty$, carries constant mass. So how about Euler: if I have a solution in the endpoint space, does it conserve energy? If I pick a random one, probably not. But look, for instance, at a vortex sheet: you have a nice smooth sheet, and let us say a flow going one way on the top and the opposite way on the bottom, so that you have two laminar flows.
Then the vorticity is a distribution supported on this sheet — this is why it is called a vortex sheet. If the sheet is smooth, then in the direction along the sheet everything is regular, but in the direction across it the velocity really looks like a shock: it looks like plus one and minus one. This solution also lies in the endpoint space, and yet it conserves energy — this was proven by Roman Shvydkoy. And there is a physical reason for the difference: in a shock, particles come in and jump into the shock, altering the speed of the shock; in a vortex sheet, particles never cross — they stay on top or they stay on the bottom, they are never exchanged. So there is a physical mechanism for why, at the endpoint, very curious things may happen, and this is one of the other very interesting problems left in this story we are telling: what happens at exactly one third — is it more like the shear flow, is it more like the shock, what is the generic behavior, and so on. As I said, we have proven part (A), and the flexibility part, as stated, is now also a theorem, due to Phil Isett: he constructed solutions with compact support in time, which in particular do not conserve kinetic energy. Then, jointly with Buckmaster, De Lellis and Székelyhidi, we have proven the statement about dissipative weak solutions. So, as written, the Onsager conjecture is closed, and what I basically want to tell you is the story of how it got to this point. Anything else here? I do want to say one more thing, since we discussed the flux so much — I will bother you a bit more with the flux. Are the Onsager and Kolmogorov stories related? The answer is yes: the flux relates them, and you have a nice discussion of this in Uriel Frisch's book on turbulence. So let us again say that we have statistically stationary solutions of both Navier–Stokes and Euler, so that when I take these averages, the value at time $t$ and the value at time zero are the same. Then what is left of the energy balance? In Navier–Stokes, what is left of this computation is that the flux $\langle \Pi^\nu_\kappa\rangle$ — with a superscript $\nu$ just so we know it is Navier–Stokes — through the shell at level $\kappa$, plus $\nu$ times the averaged gradient term, is balanced by the force term. The only thing I have assumed is that these brackets denote, say, a long-time average, so that the left-hand side enjoys statistical stationarity in time. In Euler, we just have the flux term. What happens when we send $\kappa \to \infty$? For the force term it is very clear: you have strong convergence, because $f$ is fixed. For the flux term in Navier–Stokes: if you again think of the Duchon–Robert energy balance, the limit is exactly what we called $\varepsilon^\nu$, once we put in these averages — all these limits are as $\kappa \to \infty$. In Euler, you can give the limit two equivalent meanings: it is either the limiting flux or the average of the Duchon–Robert measure, because they are the same. So now send $\nu \to 0$ — so far we have sent $\kappa \to \infty$ in both Navier–Stokes and Euler. Kolmogorov says that when you send $\nu \to 0$, $\varepsilon^\nu$ has a limit. And now comes the interesting punchline: if the statistically stationary solutions of Navier–Stokes converge, just weakly in $L^2$ — which suffices here, because $f$ is a smooth, frequency-localized function — to a statistically stationary solution of Euler, then the force term converges to $\langle f \cdot u\rangle$. But the two force terms are equal, and that means Kolmogorov's $\varepsilon$ exactly matches the limiting energy flux, i.e. the average of the Duchon–Robert measure. So the story of Onsager, which really deals with this flux, and the story of Kolmogorov, which deals with the positivity of this $\varepsilon$, are actually the same story, under some mild assumptions about the existence of stationary states and their weak convergence in $L^2$.
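In symbols — again only a schematic transcription of the balances just described, with $\langle\cdot\rangle$ a long-time average:

```latex
% Statistically stationary Navier–Stokes, then \kappa \to \infty:
\langle \Pi^{\nu}_{\kappa}\rangle
+ \nu\,\big\langle \|\nabla P_{\le\kappa}u^{\nu}\|_{L^2}^2 \big\rangle
\;=\; \Big\langle {\textstyle\int} P_{\le\kappa}u^{\nu}\cdot f \, dx \Big\rangle
\;\;\xrightarrow{\ \kappa\to\infty\ }\;\;
\varepsilon^{\nu} \;=\; \Big\langle {\textstyle\int} u^{\nu}\cdot f \, dx \Big\rangle .
% Statistically stationary Euler:
%   \lim_{\kappa\to\infty}\langle \Pi_\kappa\rangle
%     = \langle D(u)\rangle = \langle \int u\cdot f \, dx \rangle
% (D(u) = Duchon–Robert measure).  If u^\nu \rightharpoonup u weakly in L^2
% as \nu \to 0, the right-hand sides match, hence
%   \lim_{\nu\to 0}\varepsilon^{\nu} = \langle D(u)\rangle .
```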
So we have this beautiful picture: all the predictions of Kolmogorov about structure functions and so on, we can also try to convert into Euler, and in some sense this is the connection between the Kolmogorov picture and the Onsager picture. That is all I am going to say about fluxes — really, they are two sides of the same coin. Well, you see, I keep saying I will not talk about fluxes, but I want to say one more thing; I guess in this crowd I can, because you are familiar with Littlewood–Paley decompositions. If you look at the flux through frequency $2^j$, there is a paper by Cheskidov, Constantin, Friedlander and Shvydkoy which does a much more careful bound than the one above: they really carry out the Bony decomposition properly, they bound things properly, and they obtain a very interesting estimate — a sum over $i$ from $-1$ to infinity (the Littlewood–Paley decomposition starts at $-1$, and since we are on the torus with zero mean there is nothing below) of a kernel $K(i-j)$ — $j$ is fixed, $i$ is summed — times $2^{i}\,\|\Delta_i u\|_{L^3}^3$. If you take the supremum over $i$ of this last quantity, that is exactly what defines the endpoint space $B^{1/3}_{3,\infty}$. And what is this kernel? Its argument can be negative or positive — I should not have written an absolute value — and it decays exponentially in both directions, with rate $2/3$ on one side of $i = j$ and $4/3$ on the other. So it is a genuinely exponentially decaying kernel, both when $i$ is much smaller than $j$ and when $i$ is much larger, with a priori different rates. This bound tells you two things. First, because of this exponential kernel, the flux through $2^j$ is mostly affected by the Littlewood–Paley shells with $i$ close to $j$: the flux is local in frequency, and it is very improbable that something comes from infinitely far away in frequency and talks to you. Second, the forward cascade is stronger than the backward cascade, so that on average you have a forward cascade. And this is an a priori estimate, valid for any weak solution — as far as I know, this is the best estimate on the flux that I am aware of.
So now we will really start to talk about the flexible part of the Onsager conjecture. In what ways can you prove flexibility? In at least three ways that I am aware of; they all go through convex integration, or a method closely related to it, but they establish various degrees of things. (A) You could just show that energy is not conserved. Then what you have constructed is really something strange — not exactly what Onsager wanted, with dissipation of kinetic energy — but it will certainly give you non-uniqueness, if you can construct infinitely many such solutions. How do you do this in practice? Either you arrange compact support in time — that is one way to show energy is not conserved, and if the solution has compact support in time then at some moment it is the zero function, so zero is a solution and you have constructed another one, which does imply non-uniqueness — or you show that the energy doubles, so you have a solution with increasing energy, and then maybe you construct one with decreasing energy, and again you have two solutions. (B) You could prove that there exist infinitely many dissipative solutions — maybe you can even do more and prescribe their kinetic energy. Statements of this kind work as follows: you give me a smooth positive function of time $e(t)$, and I give you a weak solution, in a certain regularity class, whose $\tfrac12\|u(t)\|_{L^2}^2$ is exactly $e(t)$; in particular, if you take $e$ to be decreasing, then you have constructed dissipative solutions. (C) Or you can have a much wilder type of flexibility, related to Gromov's h-principle: you show that the weak solutions which, say, dissipate energy are not merely infinite in number, but dense in the set of all functions with a prescribed mean — dense in some very big space; that space could be the space of subsolutions, which I will introduce later on. These may look like increasing difficulty — (A) is the easiest, (B) a bit harder, (C) harder still — but in some sense they are all proved with the same technology. In terms of proofs, the first result was a type-(A) theorem due to Vladimir Scheffer — like many things in PDE, this was invented by Vladimir Scheffer, who I think had only 13 papers. He constructed a weak solution, just $L^2$, compactly supported in time — in fact, from the construction, compactly supported in both space and time — and he did this on $\mathbb{R}^2$. Scheffer's construction is a bit hard to read, to say the least. Then Shnirelman, who had two papers on this — I will cite the 2000 one, because it is the first example of a type-(B) result — proved that there actually exist infinitely many dissipative solutions, with kinetic energy bounded in time (because they are dissipative), but still only $L^2$ in space. Both of these results share this little issue of being merely $L^2$ in space, which is great if you only care about kinetic energy, but not so great if you want to reach Hölder $1/3$; and from the proofs it is not at all clear whether they produce anything in $L^p$ for $p > 2$. By the way, I should say that dissipative weak solutions are in principle different from plain weak solutions, because of weak–strong uniqueness: if you give me a weak solution of Euler whose energy is non-increasing, and for some reason there is a strong solution with the same data, then they must coincide. This is why, in principle, there should be more obstructions to constructing them. And then the breakthrough — the first breakthrough — came with the work of De Lellis and Székelyhidi; I will cite one more paper at this point and tell you what the difference is. In the first paper they realized that these constructions fall under an umbrella that had long been known in geometry, in the world of differential inclusions, under the title convex integration, and they managed to apply this method from differential inclusions to the Euler system. They proved a type-(B) result — which is of course stronger than type-(A) — and also a type-(C) result, all for bounded solutions, and that was the big thing. The importance of this result is not just that it achieved $L^\infty$; it is the proof: it realized that these very non-physical constructions for 3D Euler can be placed into a rich, old, established geometric framework of convex integration. So I will say a little about this result, but I really want to emphasize at this point that the proof gives you $L^\infty$ and not more: the convergence you get in the construction is weak-star in $L^\infty$, which will never give you $C^0$, so this method cannot give $C^0$. I will try to give you a cartoon of it: I want to state the theorem they actually proved, and then, instead of giving you their proof, I will give you the proof of a toy theorem — but I do not want to start directly with the toy.
It would seem to come out of the blue, so I want to first really say what theorem they proved, and let me first discuss a little the concept of a Reynolds stress — and at some point somebody should stop me, because I am going to keep going. No? This will probably take at least 20 minutes, so before tomorrow let me instead give you an exercise: find all — not all, find as many as you can — functions $u : [0,1] \to \mathbb{R}$ such that $u$ is bounded — well, this is redundant, the other condition implies it, so let me erase it — and $|u'(x)| = 1$ for almost every $x$. Note that if $u'$ is continuous, there are exactly two such functions up to additive constants: one with slope plus one, one with slope minus one. But if $u'$ is merely bounded, there are infinitely many, and in fact the typical one, in the sense of Baire category, is wild. This is the toy example that you can also find in the paper of Camillo and László. So I will start the next lecture by writing down what they actually proved, and I will give you the proof of this as a toy — this is toy convex integration, really. Then I will go through the painful details of the big step in this program, which was to get from $L^\infty$ to $C^0$: it may not sound like much, but that was a big conceptual step. Then I am going to show you convex integration in action in $C^0$, and that is not going to be so toy-like — it is a bit more intense. After that there was a race to get to $C^{1/3}$, and I will leave it to Tristan to tell you about that race; on Wednesday I will instead go back to Navier–Stokes. So again, the plan: tomorrow, a little about $L^\infty$ and a lot about $C^0$; I will let Tristan tell you about $C^{1/3}$; and on Wednesday I will talk about Navier–Stokes. I am sorry for going a bit over time. So we will resume tomorrow at 10. Thank you very much.
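As a postscript to the exercise above, here is a minimal numerical sketch (the function name and the grid are my own choices, not from the lecture) of the standard family of solutions: sawtooth functions with slope $\pm 1$ almost everywhere. As the number of teeth grows they converge uniformly to zero — which is not a solution — illustrating the lack of weak closedness that convex integration exploits.

```python
import numpy as np

def zigzag(x, n):
    """Sawtooth with n teeth on [0,1]: u(0) = u(1) = 0, slope +-1 a.e.

    Each member of this family solves |u'(x)| = 1 almost everywhere;
    its amplitude is 1/(2n), so zigzag(., n) -> 0 uniformly as n -> infinity.
    """
    t = (x * n) % 1.0                 # position within the current tooth
    return np.minimum(t, 1.0 - t) / n

x = np.linspace(0.0, 1.0, 10001)
for n in (1, 4, 16):
    u = zigzag(x, n)
    slopes = np.diff(u) / np.diff(x)
    # away from the finitely many corner points, difference quotients are +-1
    frac_unit_slope = np.mean(np.abs(np.abs(slopes) - 1.0) < 1e-6)
    print(f"n={n:3d}  max|u|={u.max():.5f}  fraction of +-1 slopes={frac_unit_slope:.4f}")
```

The point of the printout: the sup norm shrinks like $1/(2n)$ while the constraint $|u'| = 1$ holds away from finitely many corners, so the uniform limit of solutions fails to be a solution.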