Welcome back to GSS. Today is the happiest day of this semester for me, because this is the first analysis talk. Just a quick announcement first: I'm trying to organize a mini-conference, so if you want to speak, let me know. It will be a 30-minute talk format, and you'll be talking about your research. We want to get to know everyone's research, because as a department I think we don't communicate enough to know each other's work, so this is a good opportunity for us. Okay, let's get back to business. Antoine will tell us something about fluid dynamics. So the PDEs we're going to look at today are the 3D incompressible Euler equations. You know what Euler is, you know what 3D means; I'll explain what incompressible means. So what is the PDE we want to solve? You fix some time horizon T, and you fix an initial condition u0. We're going to work on the torus, just so we don't have to worry about integrability issues: something flat but compact. The initial condition is some vector field; that's the data we're given. We then seek a velocity field u and a pressure p. So these are our two unknowns; well, two unknowns, but really four scalar unknowns. And we want them to solve 3D incompressible Euler, which is ∂t u + u · ∇u = −∇p, on [0,T] times the torus. Then we want what's called incompressibility: the divergence of u has to be zero, again on [0,T] times the torus. And we have the initial condition: the velocity at time zero is as given. The meat of the talk, the interesting stuff, will be discussing interesting solutions of this system and interesting properties of those solutions, but I'm not going to jump into that right away.
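To fix notation, here is the full Cauchy problem from the board written out in one place (with T³ denoting the flat 3-torus):

```latex
\begin{cases}
\partial_t u + (u \cdot \nabla)\, u = -\nabla p & \text{on } [0,T] \times \mathbb{T}^3,\\[2pt]
\operatorname{div} u = 0 & \text{on } [0,T] \times \mathbb{T}^3,\\[2pt]
u(0,\cdot) = u_0 & \text{on } \mathbb{T}^3,
\end{cases}
```

with unknowns the velocity u: [0,T] × T³ → R³ and the scalar pressure p: [0,T] × T³ → R.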
I'm going to spend a fair amount of time just telling you where this PDE comes from in terms of the physics behind it, and what the physical interpretation of these terms is. It's easy to write down some random PDE; I'm going to try to motivate why this one is interesting. Just to make sure we're all on the same page about the notation, this u · ∇u thing: let's take a step back. I said we have four scalar unknowns, and we have four scalar equations, a vector one and a scalar one. So at least that very rough check passes: we have as many equations as unknowns. Now, the i-th component of u · ∇u is what you expect: you dot these two, so you get ∑_j u_j ∂_j u_i. This is the only nonlinear term in the whole PDE, so we'd better be clear on what the notation means. And you'll see that as the talk goes on, we'll pretty much forget about the pressure for the most part. The reason is that we can recover the pressure from the velocity once we know it. Why can we do that? Well, take the divergence of the first equation. The ∂t u term drops out because u is divergence free. When the divergence hits the nonlinear term you pick up one term, and on the right you get a Laplacian of p. What comes out is that −Δp equals basically a square of the gradient of u. The exact form is not too important; what matters is that we recover p from u. For example, it looks funny that we don't have an initial condition for p. That's because it doesn't matter: we just recover p from u anyway.
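Spelled out, the divergence computation just described is (summing over repeated indices, and using div u = 0 twice):

```latex
\begin{gathered}
\operatorname{div}\bigl((u \cdot \nabla) u\bigr)
  = \partial_i \bigl( u_j \,\partial_j u_i \bigr)
  = \partial_i u_j \,\partial_j u_i
    + u_j \,\partial_j \underbrace{\partial_i u_i}_{=\,\operatorname{div} u\,=\,0}
  = \partial_i u_j \,\partial_j u_i , \\[6pt]
\text{so taking the divergence of the momentum equation leaves}\qquad
-\Delta p = \partial_i u_j \,\partial_j u_i = \nabla u : (\nabla u)^{\mathsf T}.
\end{gathered}
```

This is the elliptic equation that determines p from u (up to a constant on the torus).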
And there's a physical reason why the pressure is determined by the velocity; this argument just says we can get rid of it, but we'll get to the physical reason when I say more about what the incompressibility condition means. The colon notation: if I have two matrices, A : B, I'm just summing ∑_{i,j} A_ij B_ij; that's the standard inner product on matrices. Good question. Okay, so this is the PDE we're looking at. Where does it come from, physically speaking? As was said, this is a talk about fluid dynamics, so this models the dynamics of a fluid. You start with some blob B0, a blob of fluid, and you let it flow. There's what's called the flow map, fancy F, also known as Φ, and your blob at time t is the image of B0 under Φ(t, ·). What you really want your flow map to satisfy: it should flow along the velocity you're given, and at time 0 it should just be the identity. So this describes the dynamics of our blob of fluid. How does the PDE come out of that? Well, where's the colored chalk? There it is. If this is how we're modeling our fluid, where does this ∂t u + u · ∇u come from? If I want to describe my fluid and keep track of the velocity, I need to see how it changes over time, so let's just compute that. Take a little point p in my initial blob; it goes along with the flow and ends up at some point, call it x, which is Φ(t, p). I want to know how u evolves at that point. So I look at the derivative of u along the trajectory, which is the derivative in t of u(t, Φ(t, p)). When you differentiate this, you could give it to a Calc 3 student, technically, and they can do it: the chain rule shows up, and you get ∂t u at (t, Φ(t,p)), so at (t, x), plus ∇u at (t, x) times ∂t Φ(t, p).
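In symbols, the flow map and the chain-rule computation just described are:

```latex
\begin{gathered}
\partial_t \Phi(t,p) = u\bigl(t, \Phi(t,p)\bigr),
\qquad \Phi(0,\cdot) = \mathrm{id}, \\[6pt]
\frac{d}{dt}\, u\bigl(t, \Phi(t,p)\bigr)
  = \partial_t u\,(t,x) \;+\; \nabla u\,(t,x)\,\partial_t \Phi(t,p),
\qquad x = \Phi(t,p).
\end{gathered}
```

Substituting the first line into the second is what produces the material derivative ∂t u + (u · ∇)u in the next step.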
But this ∂t Φ is the same thing as u at (t, x), by the flow map equation. So the point is that this whole thing gives you exactly ∂t u + u · ∇u. And what is this quantity? It's the acceleration of my fluid particle. So we're looking at the most basic physics equation there is, Newton's second law: to write F = ma, you need to know what the acceleration is, and that's what this is. And then, it's hidden over there, but we had this thing equal to −∇p. The pressure gradient is a restoring force. Where does it come from? It comes from the incompressibility. So let's talk more about incompressibility: where does the divergence-free condition come from? Well, let's compute the volume of my blob. That's just the integral of 1 over the blob. Since B_t is the image of B0 under Φ, you change variables back to B0, and the determinant of ∇Φ shows up: the volume is the integral over B0 of det ∇Φ dp. To know how the volume changes, I need to know how this determinant changes; again, something a dedicated Calc 3 student can do. But hold up, let's not do that right away. Let's first talk about how to differentiate the determinant itself, because it's cute. Let's change color, because this is just an aside; it doesn't really have to do with the PDE anymore, but it's fun. I want to differentiate the determinant at some matrix M0 in the direction of some matrix V, and I claim I'm going to get det(M0) times the trace of M0⁻¹V. Can you read this? Not really? Okay, fair enough, that was not the best idea; let me write it bigger. The formula looks a bit obscure at first.
So, the claim: the derivative of det at M0 in direction V is det(M0) tr(M0⁻¹V). The first time I saw that, I didn't know where it came from, but it's very straightforward if you write it the right way, so I'll do that because it's fun. Essentially, all you have to do is remember that the determinant is a nice homomorphism that takes matrix multiplication to multiplication, and then you just need to differentiate at the identity; the characteristic polynomial tells you how to do that. Deriving this is pretty cute, so let's write it this way: the determinant takes the group of invertible matrices to R minus zero with multiplication, and it's a nice homomorphism; everything's very good. The derivative of the determinant at the identity you can compute to be the trace, and that you do via the characteristic polynomial expansion: the determinant is the product of the eigenvalues, so the linear part is just the sum of the eigenvalues. Now, this map takes the tangent space at the identity of the group to the tangent space at 1 of the target. Writing it this way is slightly overkill, but it makes the picture clear. So if I want the derivative of the determinant at M0, what do I need to do? I'm differentiating around M0, and landing around det(M0). Because these things are nice groups, you know how to move between base points: here you multiply by M0⁻¹ to go from M0 to the identity, and there you multiply by det(M0). That's exactly what the formula packages together: because det is a homomorphism, you only need the derivative at the identity, and that's where the formula comes from. There's also a way to derive this by a horrible direct computation. I prefer this one.
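Not part of the talk, but you can sanity-check this derivative formula (Jacobi's formula) numerically with a symmetric finite difference. A quick NumPy sketch; the matrices here are random, chosen only for illustration:

```python
import numpy as np

# Check the claim D det(M0)[V] = det(M0) * tr(M0^{-1} V)
# against a symmetric finite difference of det along the direction V.
rng = np.random.default_rng(0)
M0 = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # shift by 3I to keep M0 invertible
V = rng.standard_normal((3, 3))

h = 1e-6
finite_diff = (np.linalg.det(M0 + h * V) - np.linalg.det(M0 - h * V)) / (2 * h)
formula = np.linalg.det(M0) * np.trace(np.linalg.solve(M0, V))  # det(M0) * tr(M0^{-1} V)
```

The two numbers agree up to finite-difference error; `np.linalg.solve(M0, V)` is just a stabler way of forming M0⁻¹V than inverting explicitly.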
So why do we care about this? Because we wanted to see how the volume evolves over time, so we have to take a time derivative of the determinant, and now we know how to do that. Let's see what comes up: d/dt of det ∇Φ. Well, it's right there: you get det ∇Φ times the trace of (∇Φ)⁻¹ times the time derivative of ∇Φ; it's just the chain rule again. And the nice thing is, when you compute what that last factor is, unsurprisingly it looks like a time derivative of Φ with a gradient as well: ∂t Φ is basically u, so the gradient gives you ∇u, and by the chain rule in space this whole thing comes down to ∇u evaluated at Φ. So the trace collapses to the trace of ∇u at Φ, which is the divergence. This is very nice: the time derivative of the determinant is the determinant times the divergence of u. That's where the incompressibility comes from, because if I want to keep track of how the volume of my blob of fluid changes over time, I just need to keep track of the divergence of u. So let's write that: d/dt of det ∇Φ equals det ∇Φ times (div u) ∘ Φ; that's a product and that's a composition. What does that mean? Well, if Φ(t, ·) is orientation preserving, the determinant has a sign, and I'm not cheating: the integrand over there was really the absolute value of the determinant, so it would be nice if that thing had a definite sign, and then I've actually computed its derivative. So if Φ is orientation preserving, then the derivative of the volume of my blob at time t is, sorry, I don't want the volume itself, I want the derivative of the volume. We're doing fluid dynamics; we want to see how things change.
So you put the derivative inside the integral, and you get the integral over B0 of det ∇Φ times (div u) ∘ Φ dp, and then you change variables back. I mean, that's all you do in PDE, right? You change variables, you integrate by parts, and things magically work out. So this is the integral over B_t of div u, just dx. I have my blob of fluid, and to see how its volume changes over time, I just need to look at the divergence of u. And of course, the orientation-preserving assumption was just to guarantee that the determinant has a sign. But remember, for the actual PDE we look at, Φ is the identity at time 0, so the determinant is just 1 there: positive at time 0. And if u is divergence free, that's preserved. The point I want to hammer home is that the flow being volume preserving is the same thing as the velocity being divergence free. In one direction: if div u = 0, then the determinant is constant, and it's 1 at time 0, so Φ is orientation preserving for all time and the volume doesn't change. So the punch line: this incompressibility condition in our PDE, a condition on the velocity, is really the same thing as requiring that the flow be volume preserving, which is more physical; it's a physical conservation law. Incompressibility equals volume preservation. So I'm deconstructing the PDE, telling you where its parts come from. When you actually derive it, you kind of go the other way around: you talk to a physicist, they tell you which quantities have to be conserved, and you ask what comes out of that. That's where the PDE really comes from, but that would take longer to explain than to reverse engineer the PDE.
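Collecting the two computations, the volume bookkeeping reads:

```latex
\begin{gathered}
\frac{d}{dt}\,\det \nabla\Phi(t,p)
  \;=\; \det \nabla\Phi(t,p)\;\bigl(\operatorname{div} u\bigr)\bigl(t,\Phi(t,p)\bigr), \\[6pt]
\frac{d}{dt}\operatorname{vol}(B_t)
  \;=\; \frac{d}{dt}\int_{B_0} \det \nabla\Phi \; dp
  \;=\; \int_{B_t} \operatorname{div} u \; dx ,
\end{gathered}
```

so div u ≡ 0 if and only if the flow preserves the volume of every blob.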
So let's recap what we have, toward the physical interpretation of the PDE. Our PDE was ∂t u + u · ∇u = −∇p, with div u = 0. The left-hand side was really the acceleration, and the right-hand side is a force acting to make sure our fluid, the flow map, stays volume preserving. That's really what the pressure is doing. Now, for those who've done some physics at some point, you'll say something is missing here: Newton's second law says F = ma, and there's no m anywhere. But that's fine, because as I was saying a minute ago, you derive these equations from conservation laws, and the most important one is conservation of mass: you don't want your fluid to gain or lose mass for free, by magic. If the flow is both mass preserving and volume preserving, then the density is constant, and our favorite constant is 1. So there's a density of 1 multiplying the acceleration: this really is F = ma, the usual Newton's second law. Okay. So that's a brief justification of 3D incompressible Euler from its physical derivation. I know not everyone is a fan of physics, but is everyone on board so far? Now that I've said a little about where the PDE comes from, let's start talking about interesting features of its solutions. Particularly important from the physical perspective are conservation laws, preserved quantities. The volume being preserved is useful; another, more important one is that if the solution is nice enough, the kinetic energy is conserved. So let's see where that comes from and what it looks like: conservation of kinetic energy. This is something physicists care about, because if your model doesn't conserve energy, they're going to say it's bogus.
And it's something mathematicians care about, because conserved quantities give you control on solutions, which gives you somewhere to start when you study them or try to prove they exist. So let's say we have a solution, a pair (u, p); let's say they're infinitely differentiable, to make life straightforward for now. So we have a velocity and a pressure, both nice and smooth, and suppose they solve (E), for Euler. (Can we get a bigger board? I know that's not up to you, but I'm struggling. That problem I can't solve.) So we have as-regular-as-we-want solutions; let's see what conserved quantity pops out. The general way you get conserved quantities in PDE, or at least a very common way, is that you multiply your PDE by the right thing and then integrate by parts. In this case we're not going to do anything fancy: you multiply the first equation by u and integrate by parts. So let's do that. You multiply by u, and in principle you integrate in time and space over [0,T] times the torus; actually, I don't need to integrate in time, and not doing so makes it clearer what happens, so let's hold off on that and do it later. So: you get the ∂t u term dotted with u, then the u · ∇u that comes from the PDE dotted with u, and on the right-hand side we had −∇p, so minus u · ∇p, all integrated in space. Two of these terms are easy to handle by integration by parts; for the time term you don't even move the derivative, you just pull it outside the integral.
From the time term you just get the derivative of the integral of ½|u|². That's the kinetic energy; again, it should be ½m|v|², but the density m is 1, so this is fine. The pressure term is nice and easy: you integrate by parts and pick up the divergence of u, which is 0, so it goes away; nothing to see there. The nonlinear term takes a little more work. Let's write it as the integral of u_i ∂_i u_j u_j. When you integrate by parts, you get two terms: one when the derivative hits u_i and one when it hits the other u_j. When it hits u_i, you get a divergence again, so 0. The other term looks like u_i u_j with the ∂_i hitting the second u_j. But if you rewrite this, it's exactly what we started with: u · ∇u dotted with another u. So that quantity equals minus itself, hence it's 0. I kind of gave it away with the title, but the kinetic energy is conserved. Now, there is one thing that seemed perfectly innocuous when we set this up: this works because u and p are nice and smooth, so that all this integration by parts makes sense, because u is at least differentiable and everything is nice. The question is: how nice does the solution have to be for the integration by parts to actually be justified, and hence for the kinetic energy to actually be conserved? And this is not just a mathematician trying to push the assumptions to the weakest possible thing. I'm told it's relevant for physics when it comes to turbulence, because, and here comes the horrible picture, if you have an airfoil and air flows past it, the flow is nice and smooth upstream,
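You can also see the cancellation of the nonlinear term numerically: for a divergence-free field on the torus, the integral of (u · ∇u) · u vanishes. Here is a small NumPy sketch using spectral derivatives; the particular divergence-free field is hand-picked for the demo, not from the talk:

```python
import numpy as np

n = 32
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

# A hand-picked divergence-free velocity field on the torus:
# u = (sin z, sin x, sin y); each component is independent of its own variable.
u = [np.sin(Z), np.sin(X), np.sin(Y)]

k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers on the 2*pi-periodic torus
K = np.meshgrid(k, k, k, indexing="ij")

def deriv(f, i):
    """Spectral partial derivative of f in direction i."""
    return np.real(np.fft.ifftn(1j * K[i] * np.fft.fftn(f)))

# div u, confirming the field really is divergence free
div_u = sum(deriv(u[i], i) for i in range(3))

# the troublesome cubic term: integral of (u . grad u) . u over the torus
integrand = sum(u[i] * deriv(u[j], i) * u[j] for i in range(3) for j in range(3))
cubic_term = integrand.mean() * (2.0 * np.pi) ** 3
```

Both `div_u` and `cubic_term` come out zero to machine precision, which is exactly the integration-by-parts identity ∫ (u · ∇u) · u dx = −½ ∫ (div u) |u|² dx = 0.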
and then it goes crazy downstream and becomes a horrible mess, which goes by the name of turbulent flow. The flow of this fluid is modeled by the Euler equations, incompressible Euler in 3D, and the solution being very irregular there is related to what happens with the energy. The solution is very irregular, and you want to know at what point irregularity breaks the conservation of energy. So let's have a look at that. But before I do, let me state the question; then I'll have to tell you a bit more about what we mean by less regular solutions, in the sense that we have a gradient of u in our PDE, so you'd think u needs to be differentiable. Turns out it doesn't, so I should tell you what it means for u to solve the PDE when it's not even differentiable. I'll do that in a second. [Question: is there a physical reason to use the torus?] Well, later we'll quote some very clever people who get solutions that are compactly supported in space-time, so it doesn't matter much whether you're on the torus or on R³. I'm fairly certain you could do R³ with the right integrability conditions. The point is that we want to highlight the regularity of u, not its decay at infinity; that's not the interesting issue here. [But physically?] Okay, physically, once the energy is not conserved we're in no man's land, right? It's not physical anymore. And no, I don't know of actual physical systems shaped like a three-dimensional torus. But as a way of modeling what happens in the interior of a fluid, away from the boundaries, this is as good as we have.
Because if you introduce boundaries... I mean, I'm kind of lying here, because in the airfoil example that I'm using as an instance of issues with kinetic energy conservation, there is a boundary, and there's stuff happening with that. But turbulent flow can happen in the interior as well; this picture is just the one most commonly seen. I think I kind of skirted around your question. It's a math problem. [Comment: and when solving numerically, for example using Fourier series, you often just assume the domain is the torus, with periodic boundary conditions.] Yeah, so the point is it's not just mathematicians doing that; engineers with simulations to run will sometimes just do that too. And also, having boundaries makes things really hard. So, the main question, which will be answered by the end: how regular, understand how smooth, how spiky, do u and p have to be for the kinetic energy to be conserved? Onsager, in 1949, so a while back, conjectured that u being Hölder continuous with exponent one third is critical: anything more regular will conserve energy, anything less regular need not. So I should remind people what these Hölder spaces are, also because recalling the definition starts giving us a hint as to why one third should appear. Recall, for α strictly between 0 and 1, you can measure u this way: you look at how u changes between two points; if α were 1, you'd have the usual Lipschitz condition. The point is you want to say that the variation of u between two points a distance d apart is like d^α.
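In symbols, the Hölder seminorm I'm gesturing at is:

```latex
[u]_{C^{0,\alpha}}
  \;=\; \sup_{x \neq y} \frac{|u(x) - u(y)|}{|x - y|^{\alpha}} \;<\; \infty,
\qquad 0 < \alpha \le 1,
```

i.e. |u(x) − u(y)| ≲ |x − y|^α, with α = 1 recovering the Lipschitz condition.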
And remember, for functions on a compact set, we have the chain of inclusions C¹ ⊂ C^{0,1} = Lipschitz ⊂ C^{0,α} ⊂ C⁰ for every α < 1. So there is a whole scale of intermediate regularity between continuous and continuously differentiable, which is what we want: a scale of regularity on which to hit just the right spot, critical in the sense that on one side energy is not conserved and on the other it is. There are other, fancier ways of writing such scales, involving Fourier transforms and Littlewood-Paley stuff; that's just buzzwords for the analysts. Okay, why one third? This is a very, very coarse explanation, but it does show up in the proof of one direction of the conjecture anyway: for showing that energy is conserved when α is strictly bigger than one third, the very raw heuristic I'll give now shows up in the proof in a significant way. So, what is the heuristic? When we ask how regular the solution must be for the kinetic energy to be conserved, we're asking how regular it must be for the integration by parts to be carried out. The troublesome term was the nonlinear one, this integral of (u · ∇u) · u. Very coarsely speaking, we have one derivative to absorb and three factors of u to absorb it with: one third of a derivative per factor. That's where the one third comes from. And honestly, this is obviously very coarse, but it's not too far off; this is really what's going on. It's kind of what you do in the Fourier analysis: you write the derivative as something you can split into fractional parts and put on each factor. Sort of, kind of.
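One common way to package this counting (a heuristic only, and my phrasing rather than the talk's): if u has Hölder increments |δ_ℓ u| = |u(x + ℓe) − u(x)| ≲ ℓ^α, then the energy flux through scale ℓ, written Π_ℓ here just as shorthand, is governed by three increments of u and one inverse length:

```latex
\Pi_\ell \;\sim\; \frac{|\delta_\ell u|^3}{\ell}
  \;\lesssim\; \ell^{\,3\alpha - 1}
  \;\longrightarrow\; 0 \text{ as } \ell \to 0
\quad\Longleftrightarrow\quad \alpha > \tfrac{1}{3}.
```

So above exponent one third the flux to small scales dies out and energy is conserved; below it, nothing forces the flux to vanish.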
I think I'll get to the proof of energy conservation for α bigger than one third, and you'll see this kind of counting popping up. Anyway, we have a problem: we want to solve a PDE that contains ∇u, and we want u to be not even differentiable. So what does that mean? It means we have to weaken our concept of a solution, so I should define what that is. I was asking earlier if people are allergic to physics; now the question is whether you're allergic to analysis. [Question: does this hold in every dimension? Is the one third dimension-independent?] I don't know. There is, and I'll get to it at the end, for the case α strictly less than one third, I think they've only proved it in 3D, and there's a specific reason for that, which is a separate issue. I'm not aware of any dimension-dependent issues at this stage; they definitely crop up later, and I'll point them out then. Okay, so let's define weak solutions, because we want a notion of solution that doesn't require u to be differentiable. I said we're going to want u and p to be non-differentiable, but let's start with them smooth, just to motivate things. Suppose they solve our favorite system; it's not on the board anymore, so I should write it again: ∂t u + u · ∇u = −∇p, div u = 0. Let's not worry too much about the initial condition right now; it doesn't matter much for what we want to do. Okay, so if I have a nice smooth solution, I can do the standard PDE thing: multiply by something, integrate by parts, see what comes up. We did that with u itself; now let's do it for an arbitrary test function. So take any test function φ, compactly supported, from [0,T] times the torus into R³.
Multiply the PDE by φ, integrate by parts, and see what pops out. Actually, let's even forget about [0,T]: let's just work on all of R in time and forget the initial condition for now. So you integrate over R times the torus: you have ∫ ∂t u · φ, then the troublesome term ∫ (u · ∇u) · φ, equal to −∫ φ · ∇p. We basically did this earlier. For the first term: φ is compactly supported, so you can integrate by parts in time and just shift the derivative onto φ, nothing fancy, but now I no longer need u to be differentiable in time, which is kind of the whole point of doing this, so that's nice. The pressure term you integrate by parts and you get the divergence of φ. For the nonlinear term, before moving the derivative, let's write it out in coordinates: u_i ∂_i u_j φ_j. So what happens when you move the derivative over? When it hits u_i, the term goes away just as before, because u is divergence free; and the other term, where I just want to land a derivative on φ, ends up as u_i u_j ∂_i φ_j, which we'll write in fancier notation as u ⊗ u : ∇φ. This is the typical trick for making sense of solutions to a PDE that are not differentiable: you move all the derivatives onto an arbitrary test function. So if we add everything up: φ was arbitrary, which means we can pick it to make the equation look nice if we want to. In particular, remember that p doesn't matter once you have u; so if we pick φ divergence free, the pressure term drops out entirely. So in particular, i.e. if φ is divergence free, then the integral of u · ∂t φ plus the integral of u ⊗ u : ∇φ is equal to zero. Call this (★).
And that's going to be our definition of a weak solution. So what's a nice, simple requirement on u to make sure this is well defined? We essentially have a square of u here, a bilinear thing in u, so if u is in L², then this is well defined: if u is square integrable, then this thing that looks like a square makes sense. So: u in L² on compact sets is a weak solution of Euler if (★) holds for every test function φ that is smooth, compactly supported (really meaning compactly supported in time), and, I almost forgot, divergence free. That's what we mean by a weak solution. Does that make sense to the non-PDE people, those still awake? Okay, good. Now remember, usually weak solutions pop up all over the place because it's easier to find weak solutions than strong solutions; that's typically how you first meet them. But in this case, we really need this notion, because we're expecting something to go wrong for sufficiently irregular solutions, something to go wrong with, really, the physical interpretation of this model. I like this situation, because there's a genuinely physical motivation for looking at weak solutions; it's not just a technical tool that helps us mathematically find some solution in some abstract space somewhere. And remember, in particular, we want to answer Onsager's question about which space is critical for energy to be conserved. He conjectured C^{0,1/3}, and it turns out he was right, as some people proved very recently. So let's do a quick lightning review of these cool results, sort of a historical snapshot of proving Onsager right. Remember, Onsager conjectured this in 1949, so quite a while back.
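As a displayed definition, with the quantifiers in place: u ∈ L²_loc is a weak solution of Euler if

```latex
\int_{\mathbb{R}} \int_{\mathbb{T}^3}
  \Bigl( u \cdot \partial_t \varphi \;+\; (u \otimes u) : \nabla \varphi \Bigr)\, dx \, dt
  \;=\; 0
\qquad (\star)
```

for every φ ∈ C_c^∞(R × T³; R³) with div φ = 0. Note that every term makes sense with u merely square integrable: no derivatives fall on u at all.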
In one direction, showing that sufficiently smooth solutions do conserve the energy was proved in 1994, so it still took a while. Although, well, that's not entirely fair; I think versions were probably proved before that, but these authors have a really slick proof that's about four pages long and really straightforward. They do hide a bazillion details, but it's very readable; it's a cute paper. This is Constantin, E, and Titi, and they prove that if u is in C^{0,α} with α strictly bigger than one third, then energy is conserved. So that's the easier direction, and it happened a fair while back. But then came this paper from 2007, by De Lellis and Székelyhidi. I'll tell you what they did and then put it in context. They proved something pretty remarkable: there exist weak solutions, very weak, just L^∞ weak solutions, okay, but that's not where it's interesting; the interesting part is that they have compact support in space-time. Let's think about what this means physically. They weren't the first to do it; Shnirelman did it about ten years before, but De Lellis and Székelyhidi's method generalized so nicely that it then carried these results very far. That's where the really cool recent stuff comes from, and they're still pushing it; I'll get to that at the very end. So let's think about what this means physically. You take your glass of water, cup of water, bottle of water, whatever container of fluid, and you leave it overnight, at rest. (This is not even a good example, but fine: you leave it at rest.) And then spontaneously it starts moving, does all sorts of crazy things, and then it just stops again. That's what it means to be compactly supported in space-time.
Like, at time zero it doesn't move, at time T — the end of the interval — it doesn't move, and in between it does all sorts of crazy stuff. And to say more: when I said their method was so powerful and generalizable, it's been pushed to say just how crazy these things can be, and I'll say more about that in a second. But this is already pretty cool. You get absolutely non-physical weak solutions. Now the next natural question was: how far can we push this? This is u merely bounded, which is very far from C^{0,1/3}. So then they pushed it, and that took about ten years, which takes us to the last couple of years. Like, the next couple of results — these guys proved them when they were postdocs. It's freaking incredible. They're pretty good. So in 2016, I think, Isett proved that when α is strictly less than 1/3 — or let me write it in a similar way. So there exists u in C^{0,α} — ah no, I gotta get the quantifiers right, or people will be mad. For every α less than 1/3, there is a u in C^{0,α} such that — such that what? Well, such that u is compactly supported in space-time. I'll say a bit more about how this was done. The tool they used, the one that then generalized nicely and allowed people to push it all the way up to the critical exponent, is what's known as convex integration. I think Gromov and Nash had something to do with starting that off. I'll say more, very vaguely, about how that works. But so this really pushed it up to the critical space. And then the thing that's even more remarkable: all these lovely people — De Lellis, Székelyhidi, Buckmaster, who was a student of the first two I think, and Vicol — what they showed was a refinement of this: for any α less than 1/3, for any energy profile, so any smooth function from [0,T] to the strictly positive numbers, there is a solution.
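With the quantifiers in the order the speaker settles on — a hedged paraphrase of Isett's theorem as described here — the negative direction reads:

```latex
% Isett (2016): below 1/3, energy conservation fails as badly as possible.
% There are nonzero Hoelder weak solutions that start and end at rest.
\text{For every } \alpha < \tfrac{1}{3} \text{ there is a weak solution }
u \in C^{0,\alpha}\big([0,T] \times \mathbb{T}^3\big), \quad u \not\equiv 0,
\quad \operatorname{supp} u \subset (0,T) \times \mathbb{T}^3 .
```

On the torus the spatial support is automatically compact, so the content is compact support in time: rest, then motion, then rest.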
There is your C^{0,α} weak solution such that its kinetic energy is exactly as prescribed. So ½∫|u|² is equal to e(t), for all time in your time interval. So, to go back to the bottle example — and I think they can fix T as well — you tell me the amount of time, you tell me how you want the energy to do all sorts of crazy things, and they can find a weak solution that does that. Whatever the energy profile is, you can make it go up and down and do whatever the hell you want. I mean, it has to be smooth, but fine — the solution isn't, that's the whole point. So again, they can get this. And they even show something more: they have a Baire category statement showing up, which is interpreted as saying that, at that regularity level, dissipative solutions — solutions that don't conserve energy — are sort of typical. They're generic, in a sense. The typical solution doesn't even conserve energy. So these are really great results. And this is just weird, but yeah, it tells you how badly things go as soon as you drop below the critical level. Let's see, okay. So now I should take a pause to make sure: are there any questions about what these results mean, or what's going on here? Yeah. "Does that mean the equation is wrong?" It means that our notion of weak solution is obviously not physically meaningful, right? Or — well, maybe I shouldn't be so categorical. What they do say is that at this low regularity, at such a low level of regularity, this has to do with turbulence and these sorts of things. I don't know enough about that to give more precise details about the connection between this and turbulence. But if there is physical relevance, that's where it is. It has to do with turbulence. That's what I'm told, and that's what they say in all these papers every time.
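The refinement being described — a hedged paraphrase of the Buckmaster–De Lellis–Székelyhidi–Vicol statement as the talk presents it — prescribes the kinetic energy exactly:

```latex
% Any smooth, strictly positive energy profile is attained by some
% C^{0,alpha} weak solution, for every Hoelder exponent below 1/3.
\text{For every } \alpha < \tfrac{1}{3}
\text{ and every smooth } e : [0,T] \to (0,\infty),
\text{ there is a weak solution } u \in C^{0,\alpha} \text{ with}
\quad
\frac{1}{2} \int_{\mathbb{T}^3} |u(t,x)|^2 \, dx \;=\; e(t)
\quad \text{for all } t \in [0,T].
```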
It means — right, I didn't say it, but for smooth initial data you have smooth solutions for short time, right? So if your data starts nice, everything stays nice. I guess what I want to say is that this is about the interplay of the weak solutions and the equation. It's really saying something about the notion of weak solution, not just the PDE. And to go back to how I was motivating weak solutions earlier: I was saying that usually, the first time you see them, they're a technical tool to find solutions more easily. But the point, as this clearly shows, is that it matters quite a big deal which solution space you're looking at, because otherwise some crazy things happen. So yeah, the physical buzzword for the relevance of all this is turbulence, but I don't know much more. Yeah. "Why do we land on Hölder spaces in particular when we're talking about these weak solutions? Why should those two things go together?" Okay, fine. If you want to phrase it in terms of finer scales of spaces: these guys don't actually prove it just for Hölder. They prove it for a Besov space, B^α_{3,∞} I think, because you want something that's like L³-integrable with α derivatives — an L³-type thing. And the ∞ there is kind of like the second index in Lorentz spaces, because they do this with Littlewood–Paley; it's how you sum things up. The point is, they prove this in something that's a Besov space. So look: you have L², you have C^∞, and you want to find where it stops in between. They do prove it there, and of course that scale is finer than the Hölder scale.
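For the Besov space that comes up in this exchange — assuming the standard increment characterization for 0 < α < 1 — the norm is roughly "α derivatives measured in L³":

```latex
% B^alpha_{3,infty}: increments of size |h| are controlled by |h|^alpha
% in L^3, with a sup over scales playing the role of the second index.
\| u \|_{B^{\alpha}_{3,\infty}}
  \;\simeq\; \| u \|_{L^3(\mathbb{T}^3)}
  \;+\; \sup_{h \neq 0} |h|^{-\alpha}
        \, \| u(\cdot + h) - u(\cdot) \|_{L^3(\mathbb{T}^3)} .
```

Replacing L³ by L^∞ recovers the Hölder seminorm, and C^{0,α} embeds into B^α_{3,∞}, so a conservation theorem on the Besov scale is strictly sharper — which is the point being made here.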
So they do prove it there, and they prove a strictly sharper result. So I guess you're right: you don't have to stop at Hölder, you can do finer scales — finer refinements of regularity and so on. Does that answer your question? Yeah. Yeah, exactly — you could play around with your integrability and regularity and things like that, I guess. And yeah, you're gonna need a lot of derivatives to get into that. The point is — this is getting too specific to analysis — Hölder is of course just one way of measuring regularity, and they do prove it for a finer one. Any other questions about what these results say? Yeah. "Could you get a solution whose regularity is like a Brownian motion path — sort of a partial converse, prescribing something really rough?" So you're saying: give me something really rough, like Brownian motion, and find a solution matching it. It depends what you mean by something that looks like a Brownian motion. If you pick a fractional Brownian motion — the ones with a Hurst index, is that what they're called? — and you pick the one that matches this regularity, then yeah: you can get solutions that are Hölder of any exponent below one-third, so solutions that are just as regular as some fractional Brownian motions, I guess. Yeah, sure. We can talk more about this at the end. Any other questions?
So I guess let's give a very, very quick sketch of this result, where we'll see how that rough heuristic — you split the one derivative evenly among the three factors, giving you one-third — shows up. And then I'll give a very, very rough sketch of the convex integration stuff behind the other results, just as a hand-wavy argument. Oh, sorry — one very quick question first. "So this result shows that when α is less than one-third, energy need not be conserved, and above one-third it is conserved. What happens exactly at one-third?" Exactly at one-third? I think this guy has a paper on it. I don't remember what the answer is, but if you look him up, he's got something about what happens at exactly one-third; I don't remember off the top of my head. Yeah — I mean, it really is: above, it's conserved, and below, it fails to be conserved in an arbitrary way, basically. So, okay, a very quick sketch of this paper from '94. What they do is basically take the weak formulation, test it against a smoothed-out version of the solution — a mollified, smoothed-out version — integrate by parts a bunch of times, and then get a remainder they have to control, which they can send to zero precisely when α is bigger than one-third. So let's sketch that. We're gonna take our test function in the weak formulation to be the solution mollified twice, where by mollifying we mean that we convolve — we just take a running average by convolution. Right, so for any function f, f_ε means I take a running average of f at scale ε; that's what this means. More precisely, φ_ε is just a rescaled version of some reference φ.
Think of this φ as some nice symmetric probability-density-type thing, like a compactly supported Gaussian-type bump — something that averages out to one. So: smooth, again on the torus; between zero and one; integral one; symmetric. Also known as a standard mollifier. Mollifiers have some nice properties; we'll use a bunch of them, but I'm not gonna write them all down, because we're gonna skip those steps anyway. But just to give you a rough idea, one thing worth noting: if I have f and g, and I pair f against a running average of g, that's the same as pairing the running average of f against g — averages of averages, you can just move them around. This is why we mollify twice: you can then move one of the mollifications onto the other guy. So remember, we have u acting on the test function; if the test function is mollified twice, you move one of those mollifications onto u, and you get u^ε appearing twice. Once you have that, you plug it into your weak formulation, and you get that the integral over the torus of the kinetic energy of the mollification at time t, minus the kinetic energy of the mollification at time zero, equals this remainder: an integral from 0 to T over the torus of — right, this is just one of the terms in the weak formulation, and the other one we already integrated, because remember, we had u acting on ∂_t of the doubly mollified thing, so you move one mollification onto u, take the ∂_t outside, and that's what comes out. This should be capital T, same as over there. And so this is now the term you have to control.
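This running-average picture can be sanity-checked numerically. Here's a small sketch — the function, the grid, and the discrete top-hat kernel (a non-smooth stand-in for the mollifier) are all my illustrative choices, not from the paper — showing that mollifying a C^{0,1/2} function at scale ε moves it in sup norm by about ε^{1/2}, the power law the argument is built on:

```python
import numpy as np

# Numerical illustration (choices are mine, not from the talk's paper):
# a Hoelder-1/2 function moves by roughly eps^(1/2) under mollification
# at scale eps, matching the eps^alpha scaling used in the estimate.
alpha = 0.5
N = 50_000
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.abs(np.sin(x)) ** alpha       # C^{0,1/2} function, cusps at 0 and pi

def mollify(f, h, eps):
    """Running average at scale eps: convolve with a mass-one top-hat kernel."""
    half = int(eps / h)
    kernel = np.full(2 * half + 1, 1.0 / (2 * half + 1))
    padded = np.concatenate([f[-half:], f, f[:half]])   # periodic padding
    return np.convolve(padded, kernel, mode="valid")

h = x[1] - x[0]
errs = [np.max(np.abs(mollify(f, h, eps) - f)) for eps in (0.1, 0.05, 0.025)]
print(errs)   # shrinks roughly like sqrt(eps): quartering eps halves the error
```

Near a cusp the top-hat average of √|x| over a window of width 2ε is about (2/3)√ε, so each quartering of ε should roughly halve the sup error, which is what the run shows.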
It makes sense that it's this one, because it's the nonlinear stuff that always messes you up. So let's rewrite this by moving one of the mollifications onto the bilinear term. Now I'm gonna wave my hands a lot — the details are in their paper, and if you play around with convolutions and mollifications enough, they're pretty straightforward — but the key point is this starred term: you want to send it to zero as ε goes to zero. Because then this side equals that side, so energy is conserved at the level of the mollifications, and the mollification is super nice, so you can always pass to the limit; if energy is conserved there, it's conserved in the limit. So I just need to show this guy goes to zero as ε goes to zero. And why does it? Well, you look at how ε interacts with the regularity of u. u^ε is a running average at scale ε, so whenever a u^ε shows up, an ε^α comes out — because that's how much u can change over a scale ε if it's in C^{0,α}; it changes like ε^α, that's the whole point of fixing this power law for the increments of u over the domain. So every time I hit one of these guys, I get an ε^α. I get ε^α, ε^α from these two guys — I mean, it doesn't come straight from there; you split this into a bunch of terms you can handle, but they all give you these factors. And then this guy: well, I get another ε^α, but I also have a derivative, and the derivative ends up hitting the mollifier, pulling out an ε^{-1} — that's where the minus one comes from. So it really is three u's and one derivative. And then you have a prefactor like the C^{0,α} norm of u cubed — whatever norm it ends up being, I'll put this in quotes: the "right" functional norm of u shows up. So this term, which is of size ε^{3α−1}, goes to zero as ε goes to zero precisely when α is bigger than one-third. So when α > 1/3, you can just pass to the limit and send this away. And even in the way they do the proof, they say it has something to do with turbulence, because you're looking at averages, and apparently a lot of the rigorous analysis of turbulence goes through statistical properties. So apparently there's some deeper relation even in the way you prove this. What I'm trying to say is that mollification is the standard tool in PDE, but here it's also telling you something about averages, which is apparently relevant to the physical picture — but again, I don't know much more than that; this is just paraphrasing what they say in the paper. So that's a rough sketch of: when things are regular enough, energy is conserved. Now for a very rough sketch of the other side of the critical space: when things are not regular enough, you get all sorts of wild things. And how do you build these wild things? With convex integration — which I don't really know why it's called convex integration — but I can give you a very rough sketch of what they do. So, a very big-picture sketch of convex integration, which is what they use in all those recent papers, starting in 2007 with that big paper by De Lellis and Székelyhidi. Well, it doesn't start in 2007 — it was used by Gromov and Nash for fancy things a while back — but this is the powerful tool that lets them push these results. They're even using it for Navier–Stokes now; it's really impressive what they're doing. The very rough sketch is that they reformulate the PDE. So the PDE we have is this.
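What's on the board at this point — a hedged reconstruction: Euler in divergence form, and the linear-system-plus-constraint rewriting the speaker describes next. The specific traceless normalization of S and the shifted pressure q are the standard choices from the De Lellis–Székelyhidi papers, supplied here as an assumption:

```latex
% Euler in divergence form (using div u = 0):
\partial_t u + \operatorname{div}(u \otimes u) + \nabla p = 0,
\qquad \operatorname{div} u = 0.
% Reformulation: a LINEAR system in (u, S, q) ...
\partial_t u + \operatorname{div} S + \nabla q = 0,
\qquad \operatorname{div} u = 0,
% ... subject to pointwise, non-PDE constraints tying S back to u
% (standard traceless normalization, assumed here):
S = u \otimes u - \tfrac{1}{3}\,|u|^2 \operatorname{Id},
\qquad q = p + \tfrac{1}{3}\,|u|^2 .
```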
Okay, I'm writing it slightly differently, but if you push the divergence inside, you get a (div u) term, which goes away because it's zero, and then the derivative hits the other factor and you get the term you had before. And you have div u = 0 as well, but just for clarity we're gonna suppress that for now. So you write this in a different way — you sort of change variables, basically. You solve instead ∂_t u + div S + ∇q = 0. And this is gonna sound almost too simplistic, but: they start from a nonlinear PDE and get a linear one, via a nonlinear change of variables. It sounds like nothing, but what does it allow them to do? So yeah, S is this minus that — they get these pointwise constraints. These are not PDEs anymore; they're just constraints. So you rewrite a nonlinear PDE as a linear PDE subject to nonlinear pointwise constraints. And what that buys them is a linear PDE, so they can do all sorts of Fourier stuff and just add oscillations as much as they want. What happens is that they add oscillations to things that don't satisfy the constraint but do satisfy an inequality version of it, and push them all the way to the boundary of that set, up to equality, basically. Very, very roughly speaking. One thing that's interesting: this is where I know they really rely on 3D, because they add sort of plane-wave-type solutions supported near lines, and you can't fit a lot of non-overlapping lines in 2D, but you can in 3D. They call these things Mikado flows, because that's what Mikado sticks look like. So the point is there's some interesting geometry in 3D that you can exploit there. And yeah, I think I've been speaking for way too long, so that's it. Yeah? "So why did Onsager conjecture that α bigger than one-third in the first place?"
Where does the one-third come from? Yeah — so Onsager had a different argument, more Fourier-series-based. I mean, everyone already knew that if any term was gonna give you trouble, it was that nonlinear one; that was guaranteed. So you look at what happens on the Fourier side, and you end up with a power law for how the amplitudes have to behave at different frequencies — how energy has to shift from one Fourier frequency to another — and that's where the one-third comes up. That's roughly the derivation he had. There's a quick comment in that paper of Constantin, E, and Titi where they mention the statistical-analysis type of idea he was using on the Fourier side; that's where the one-third comes from. And there's a remark that their formulation of the Onsager conjecture in terms of Hölder spaces is sort of a pathwise version of what Onsager did in that statistical setting. Again, I'm mostly paraphrasing, and I don't know much more about what Onsager did exactly, but it was something to do with frequencies and Fourier stuff. Yeah. Also, the modern statements use Besov-type spaces that weren't widely used back in the '50s, I think — so the technology wasn't quite there yet, perhaps. Yeah. "What about Navier–Stokes?" So Navier–Stokes is the same as Euler, but — thanks, Giovanni — there's an extra νΔu term here, which is really nice and regularizes everything.
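For completeness, the system being compared to Euler here — Navier–Stokes is Euler plus the viscosity term just mentioned:

```latex
% Navier-Stokes: Euler plus viscous dissipation (nu > 0). The nu*Laplacian
% term smooths the flow; formally setting nu = 0 recovers Euler.
\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \Delta u,
\qquad \operatorname{div} u = 0 .
```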
So this PDE should be a lot nicer, but they're still pushing the same convex integration techniques to get horrible solutions to it too — non-unique, doing crazy things, and all that. This is very recent: they had a paper around August last year, Buckmaster and Vicol, about these crazy solutions to Navier–Stokes. They're not quite Leray solutions — they're weaker than that, which is why everyone is not panicking yet — but they're pretty close, and it's pretty exciting. So that's another demonstration that these convex integration techniques are pretty damn powerful, because they're pushing them all the way up to Navier–Stokes. The physics behind why this equation is nicer: the extra term is viscosity, so it's telling you that neighboring fluid particles rub off on each other, they drag each other along, so sharp gradients in the velocity don't develop the way they could before — that's a hand-wavy account of the physical meaning behind why these solutions are nicer. Any other questions? Yeah. "Have people tried to look at some other energy instead?" Well — like other conserved quantities, or what do you mean? "Not necessarily a different equation. Something conserved that you'd name an energy. Like for the heat equation, the integral — say the mass — is conserved, but you can also write down other quantities that are monotone."
Yeah — so, Kevin, part of the issue with why Euler and Navier–Stokes have resisted for so long is that we don't know many conserved quantities, or monotonically decreasing quantities. We don't have that many of them, right? Like the last guy who came to give a colloquium, Alexis Vasseur — he had this whole thing about the entropy for some compressible Navier–Stokes stuff. The fact that they have some quantity — not even conserved, just an entropy that decreases — allows them to get all sorts of control. The point is, it's really hard to find conserved or monotone quantities, and when you do find one, you get a lot out of it. So I guess people are trying to do exactly that — find conserved quantities and monotone quantities — it's just very hard. But yes, once you do find one, you get all sorts of great stuff from it. Yeah. Any other questions? Let's thank him.