Yeah, so let me recall what we discussed last time. Last time I looked at the so-called inviscid limit. The idea was to start from Navier-Stokes with a viscosity mu: we had an equation of this type, with the divergence-free condition and u at time 0 equal to some u0 of x. The important thing was the boundary condition. If I simplify and pose the problem in the half space, say omega is the upper half space R^3_+, then the natural problem has the Dirichlet boundary condition. And I explained last time that understanding this limit as mu goes to 0 is an open problem with this boundary condition. So if I call the solutions u^mu, because they depend on the viscosity mu, it is an open problem to understand the limit of u^mu as mu goes to 0, even in the interior of the domain. The reason is the following. You have energy estimates: they tell you that the L^2 norm of u(t) squared, plus the viscosity times the time integral of the gradient squared, is bounded by a constant coming from the initial data. That is the only uniform bound you have. Now, if you want to use a compactness method with this bound, all you can deduce is that u^mu converges weakly to some limit as mu goes to 0, say in L^2 in time with values in L^2 — locally in time; it's not a problem of long time, just take a small time interval. But with only weak convergence, you cannot pass to the limit in the nonlinear term. And it's not a question of being in the interior or not. Now, your question about the interior is related to the idea of trying to prove higher regularity: if I had higher regularity, then maybe I could do something.
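Written out, the uniform bound and what it gives are the following (a sketch of what is said above; the notation is mine):

```latex
% Energy estimate for Navier-Stokes with Dirichlet boundary conditions:
\|u^\mu(t)\|_{L^2}^2 + 2\mu \int_0^t \|\nabla u^\mu(s)\|_{L^2}^2 \, ds
  \;\le\; \|u_0\|_{L^2}^2 .
% By weak compactness this only yields, along a subsequence,
u^\mu \rightharpoonup \bar u \quad \text{weakly in } L^2_{loc}\big(0,T; L^2\big),
% which is not enough to pass to the limit in the nonlinear term
% u^\mu \cdot \nabla u^\mu .
```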
But if you try to prove higher regularity at fixed viscosity — yes, for Navier-Stokes we know how to propagate higher regularity. In three dimensions, though, the time of existence may depend on the viscosity and can shrink with it. In two dimensions you can prove global existence for Navier-Stokes even with higher regularity, but you don't get uniform bounds: only the energy gives something uniform. So if you try to prove higher regularity, you can, but in three dimensions only on some interval [0, T_nu], and that interval can shrink. But that's not really the issue, because even in 2D, where higher regularity is global, the same problem appears. Say I want to prove a bound in some H^s space on a fixed interval — fix the time to be 1, for instance. This you can prove, but the constant you get depends on nu, so it is useless for any compactness method. So the only way to prove something is to try to find a corrector. Any more questions about this? So, before I go to Prandtl, I want to mention a few, I would say, regularizing effects that allow you to prove that the limit takes place. The general problem is open, as I mentioned, but there are favorable situations. Last time we mentioned one of them: replacing nu times the Laplacian by a different (anisotropic) viscosity, imposing that the vertical part also goes to 0 appropriately. That is a good situation.
The other good situation is replacing the Dirichlet boundary condition by the Navier boundary condition. I didn't really explain it precisely, but it turns out to be favorable. Another case is so-called rotating fluids — I'll explain it. A fourth case is some MHD models. All of these are cases with some regularizing effect that, in a sense, changes the type of boundary layer equation you get, and that helps. The case of rotating fluids, for instance, is when you add a Coriolis term: since I am in the half space, you add (1/epsilon) e_3 cross u. If you add such a term to the equation, the energy is unchanged, because this term just makes things turn — it contributes nothing to the energy. But instead of getting the Prandtl boundary layer, which I am going to derive, it gives you another type of boundary layer, the so-called Ekman layer. And it turns out that in that case you can perform the limit. Of course, the limit will not be just Euler: in this case it will be 2D Euler, because that term kills the z dependence. [Question: usually the problem is epsilon and nu both going to 0?] Yes, epsilon and nu. There is a theorem — actually a theorem of mine from '98, exactly in the case where epsilon is equal to nu. Usually we look at that problem between two plates: the domain omega is the torus times (0, 1), so it is periodic in x and y, and z is between 0 and 1. Then you can study the limit.
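The rotating-fluids setup described above can be written out as follows (a sketch; I take epsilon equal to nu as in the '98 theorem mentioned):

```latex
% Rotating Navier--Stokes in \Omega = \mathbb{T}^2 \times (0,1):
\partial_t u + u\cdot\nabla u + \frac{e_3 \times u}{\varepsilon}
  - \nu \Delta u + \nabla p = 0,
\qquad \nabla\cdot u = 0, \qquad u|_{z=0,1} = 0 .
% The Coriolis term e_3 \times u is orthogonal to u, so it drops out of the
% energy identity; but it replaces the Prandtl layer by an Ekman layer,
% and (for \varepsilon = \nu) the limit is 2D Euler with a damping term
% coming from the boundary layer.
```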
And you prove that you converge to Euler — actually, not exactly Euler: Euler with a small damping coming from the boundary layer. So here the convergence takes place: u^nu converges to u^0, where u^0 solves 2D Euler with damping. Again, the rotation is a favorable effect that, in a sense, stabilizes the boundary layer — in that case it actually changes the boundary layer: the main terms in the construction change. That's one good effect. The other good effect is some MHD models. I was planning to talk a little about them, but maybe I will not. You can couple your Navier-Stokes equation with an equation for the magnetic field; there are many such models, and depending on the type of boundary condition you put, the coupling can have a regularizing effect and stabilize the boundary layer. There is a whole literature on MHD models. OK, so let's now discuss the derivation of Prandtl. This is a little bit related to your earlier question. You look at the equation — now let's forget about the rotation. A natural thing to say is: the viscosity should not be important in the interior, so there we should have Euler, and there is a small region near the boundary where viscous effects are important. That makes a lot of sense. In that region, you try to match the data of Euler to 0: you have the Euler data at the top of the region, and you want to match it down to 0 at the boundary. So what should we do there?
So the question is: what is the size of this region? If you remember what we said about Kato — Kato says that if in a region of size nu there is no dissipation, then things are OK. Here, the way to find the size is to ask that the viscous term be of the same size as the time-derivative and transport terms. Then the natural size is square root of nu. So the natural size of the layer is square root of nu. So I start with my variables x, y, and I make a change of variables: I keep x, so capital X equals small x, and capital Y will be small y over square root of nu. Let's do it in 2D — the derivation is the same in 3D, but we'll do it in 2D. Now the velocity: u^nu has two components, u and v. I set capital U, as a function of (t, X, Y), equal to small u of (t, x, y). And what should V be? Remember the divergence-free condition: it tells me that u_x plus v_y equals 0, as for Navier-Stokes, and I would like to keep it in the new variables — I want d_X U plus d_Y V to equal 0. These are the rescaled derivatives. So what should I put for capital V? I need to multiply v by a factor in front: V equals v over square root of nu, so that the 1 over square root of nu coming from the Y derivative is compensated. [Brief exchange about the direction of the factor — we'll check it.] OK, now we take our equation and just write down what we get. We get d_t of capital U. Then u grad u: this is normally u d_x u plus v d_y u. How does this change? The first term becomes U d_X U; and the v is square root of nu times capital V.
And the d_y is 1 over square root of nu times d_Y — we always keep in mind that d over dy produces this factor. So the transport term is exactly U d_X U plus V d_Y U. Now the viscosity: I keep minus nu d²_X minus d²_Y, plus the pressure term d_X p. So this is the equation for the first component. The pressure — let me keep it like this for now. Makes sense? Now let's write the equation for the second component. I get d_t of v, and v becomes square root of nu times capital V; then the other terms are the same, with an overall square root of nu, except the pressure term. That's what we get. Now we do asymptotics. At leading order, what does the second equation give? The pressure term is the main term, because everything else has a square root of nu in front. So formally, it tells us that d_Y p equals 0. And that's all we get from that equation, so I can just remove the other terms. What do I get finally? I get the first equation, with a pressure that only depends on x, not on Y. Now, how do I recover v? I recover V from the incompressibility condition. The other thing is that u and v both vanish at the boundary. So V is just the integral from 0 to capital Y of minus d_X U dY'. As of now, I didn't talk about Euler — I just wrote the leading-order equation; that's all I did. And I dropped the time derivative in the V equation. [You don't drop the last one? This one?] Yes, this one we drop also. I still need one thing: I need to understand what happens when capital Y goes to infinity. It seems a little counterintuitive, but capital Y going to infinity corresponds to small y equal to 0 — it corresponds to the boundary trace of Euler.
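The change of variables just described gives the following (a sketch of the computation; in 2D, with the notation above):

```latex
% Boundary-layer scaling: X = x, \ Y = y/\sqrt{\nu}, \ U = u, \ V = v/\sqrt{\nu},
% chosen so that \partial_X U + \partial_Y V = 0 is preserved.
% First component:
\partial_t U + U\,\partial_X U + V\,\partial_Y U
  - \nu\,\partial_X^2 U - \partial_Y^2 U + \partial_X p = 0 .
% Second component (overall factor \sqrt{\nu}, except the pressure):
\sqrt{\nu}\left(\partial_t V + U\,\partial_X V + V\,\partial_Y V
  - \nu\,\partial_X^2 V - \partial_Y^2 V\right)
  + \frac{1}{\sqrt{\nu}}\,\partial_Y p = 0 .
% At leading order the second equation reduces to \partial_Y p = 0:
% the pressure does not vary across the layer.
```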
So this is the size of my layer: capital Y going to infinity is the top of the layer, but it corresponds to what happens for Euler at y equal to 0. So that's what we do: we look at the limit capital Y goes to infinity, and we require that U go to the trace of the Euler solution. That's really the matching asymptotics. And this, in a sense, determines what the pressure should be: the pressure we have in the layer should correspond to the pressure of Euler. Basically, the way you compute the pressure is that you take the first equation and send Y to infinity; the viscous and vertical terms drop, and what you end up with is the Euler equation at the boundary, and that is the equation you get for the pressure. [Question: when you send Y to infinity, you assume some kind of limit at the formal level, and you are just taking the limit?] Yes. The whole thing is really based on this formal idea of matched asymptotics. In a sense, you are saying that if you take a region much bigger than square root of nu but still small — say of size nu to the power one quarter — then small y is like nu to the one quarter and big Y is like nu to the minus one quarter. So the limit capital Y going to infinity still corresponds to small y going to 0 in Euler. And then you match.
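The matching condition and the resulting pressure can be written out like this (a sketch; u^E denotes the tangential Euler trace, my notation):

```latex
% Matching: the top of the layer (Y \to \infty) corresponds to the bottom of
% the Euler region (y \to 0), so we require
U(t, X, Y) \;\xrightarrow[\,Y \to \infty\,]{}\; u^E(t,x) := u^E(t,x,0).
% Since \partial_Y p = 0, the pressure in the layer is the trace of the Euler
% pressure, determined by the Euler equation evaluated at the boundary
% (a Bernoulli-type relation):
\partial_t u^E + u^E\,\partial_x u^E = -\,\partial_x p .
```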
You say: this solution should correspond to the Euler solution. And for the Euler solution, you can just take the trace. There are two ways of trying to determine the pressure; one is to take the trace of the Euler solution. This trace is the first component of the Euler solution: normally the Euler solution also has a vertical component v^E, but v^E vanishes at the boundary. So basically, when we look at Prandtl, we usually simplify even more, because most of the time we take this trace to be a constant, or just a function of x — either constant or depending only on x; usually we try to drop the time dependence. [Question: if the trace is constant, the pressure is also constant?] Yes, right. So there are different cases. I can take u^E(t, x, 0) to be a constant, in which case there is no pressure — the pressure gradient is 0. Or in other cases, say some linear dependence, or just some u^E of x, in which case you get a pressure. But the difficulties are essentially the same, so if you want to simplify, take the trace constant, and then you don't even have the pressure. So this is the derivation. We add to it the condition that U goes, as Y goes to infinity, to the Euler trace. And the pressure I can forget, because for me it is just a function of x. So I have this equation, with U equal to 0 at Y equals 0, U prescribed at infinity, and V computed from the incompressibility. So this is how I compute V, and I have this equation on U.
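Collecting the pieces of the derivation, the system reads (a sketch in the notation above):

```latex
% The Prandtl system:
\begin{cases}
\partial_t U + U\,\partial_X U + V\,\partial_Y U - \partial_Y^2 U
  \;=\; -\,\partial_X p \;=\; \partial_t u^E + u^E\,\partial_x u^E,\\[4pt]
V(t,X,Y) \;=\; -\displaystyle\int_0^Y \partial_X U(t,X,Y')\,dY',\\[4pt]
U|_{Y=0} = 0, \qquad U \to u^E(t,x) \ \text{as } Y \to \infty .
\end{cases}
```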
So when you look at it, it seems to be a scalar equation; you might think this should be a good equation. So what are the difficulties? The main problem comes from the term V d_Y U. This term has a loss of derivative: V is an antiderivative in Y of d_X U, so the term costs an extra x derivative. Of course, the term U d_X U also has a loss of derivative, but that one is a transport term, and we know how to deal with transport. When the loss of derivative sits in a transport term, that's a good term: high-order energy estimates still work. For V d_Y U, they do not. So now, energy. Let me forget about the pressure first: just take the Euler trace to be a constant — not necessarily 0, but say 1. Then you can multiply by U and integrate by parts, and you get that the integral of U squared over 2, plus the dissipation, is controlled: if you multiply by U and integrate by parts, the transport terms disappear, and you get an identity of this type. Here I am integrating in x and in y from 0 to infinity. [Question about the constant.] Yes — there is something wrong with what I wrote: it works as written if the constant is 0; this is very formal, so assume the constant is 0. If the constant is not 0, you have to subtract something corresponding to the behavior at infinity. But basically, you can prove something like that. Now, if you want to go to higher regularity, it turns out there is a good quantity when you take a derivative in y.
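The basic energy identity alluded to above, in the simplified case where the Euler trace is 0 (a sketch; this is the formal version):

```latex
% Multiply the Prandtl equation by U and integrate over X and Y > 0:
\frac{d}{dt}\int \frac{U^2}{2}\,dX\,dY + \int |\partial_Y U|^2\,dX\,dY = 0 .
% The transport terms vanish by \partial_X U + \partial_Y V = 0, and the
% integration by parts in Y produces no boundary term since U|_{Y=0} = 0
% and U \to 0 as Y \to \infty.
```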
If I take the equation and differentiate it in y, I can introduce what we call omega, the vorticity, omega = d_y u. It satisfies the same type of equation, and omega goes to 0 at infinity, because u has a limit there, so its derivative goes to 0. So you can also write down something similar for omega. The problem is x derivatives. The y derivatives are fine; the problem is what happens if I take an x derivative of the equation. If I do, I get the following: the good terms, plus a bad one. The term coming from the Euler data is fine — it is fixed, not the problem. The transport term is OK: it doesn't lose derivatives. It is the term with v_x that loses derivatives: v_x is like two derivatives of u — this term is like d_y^{-1} of u_xx — and you don't know what to do with it. It loses a derivative. So if you are losing derivatives, it seems the only way to solve is to use Cauchy-Kowalevski and take analytic initial data. This is similar to trying to solve an equation like d_t u plus some function of u_x equals 0, or, say, d_t u minus D u equals 0, where D is like a derivative. Equations like this you can only solve in analytic regularity. And indeed, it's true: Prandtl can be solved in analytic regularity, and there are many papers dealing with that. One of the main works is by Caflisch and Sammartino, around '98, who studied the problem in analytic regularity. They also studied the inviscid limit.
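The model equation mentioned above ("D is like a derivative") can be made precise like this (a standard illustration, not from the lecture):

```latex
% Model for the loss of derivative: with |D_x| the Fourier multiplier |\xi|,
\partial_t u - |D_x|\, u = 0
\quad\Longrightarrow\quad
\widehat{u}(t,\xi) = e^{t|\xi|}\,\widehat{u_0}(\xi).
% The solution exists only if \widehat{u_0}(\xi) decays like e^{-\delta|\xi|},
% i.e. only for analytic data; Sobolev regularity is destroyed instantly.
% The Prandtl term V\,\partial_Y U \sim -\big(\int_0^Y \partial_X U\big)\partial_Y U
% behaves like such a first-order loss in x, which is why Cauchy-Kowalevski
% and analyticity are the natural first setting.
```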
They also do the inviscid limit in analytic regularity, and they even justify the boundary layer. So it's a quite important work: Prandtl plus the inviscid limit, in analytic regularity. In their work, they assume analyticity in both x and y. But from the formal argument I'm showing here, we are only losing in x, so it's not really necessary to put analyticity in y. And that was removed later: there are works of Sammartino with other collaborators, and also more recent work of Vlad Vicol with Igor Kukavica, studying Prandtl with analytic regularity only in x — analytic in x, Sobolev in y. But we cannot do the inviscid limit in that case. So in such results, what are the difficulties? The problem is more or less how you deal with the decay in y. We understand more or less what's happening in x: you have a loss of derivative, so if you put analyticity in x, you are fine. But then you have to put the right weight in y. That's really the difficulty, because even the behavior of v as y goes to infinity is not clear — v is not bounded as y goes to infinity. So you have to design the spaces the right way. Once you put analyticity in x, the problem in y is not really one of regularity, but of decay: how u approaches the Euler trace as y goes to infinity. OK, so that's one type of work. [Question: in the Caflisch-Sammartino result you said there is an inviscid limit — what is the boundary condition for Euler? For the moment I don't see it; it doesn't depend on the solution.]
So, Caflisch and Sammartino — this is the strongest result in terms of justifying the inviscid limit. Let me explain. You take Navier-Stokes in the half space and solve it in some analytic regularity. Then you look at Euler. For Navier-Stokes, you impose u^nu equal to 0 on the boundary. For Euler, you impose v^E equal to 0 on the boundary, but u^E is free. My Euler solution I write with this notation. So I solve Euler and get my u^E. Now I come here and solve Prandtl with that trace, and I have my solution of Prandtl. Then what they are able to do — and this is a really strong result — is to justify the decomposition: they prove that the solution of Navier-Stokes behaves like Euler in the interior and like Prandtl in the layer. That's their result. It is really the strongest result in terms of conclusion: the inviscid limit together with the Prandtl description. Later on, people started solving Prandtl in different, less regular spaces — with monotonicity, with this, with that — and from those results one also wants to get the inviscid limit, but then the conclusion is slightly weaker, which is normal. Now there is another result, by Oleinik, in the '60s — '62, maybe. She proves that under a monotonicity condition, the Prandtl system P is well-posed. What does that mean? You have to assume that u is monotone in y. For instance, to fix ideas, I can take the Euler trace at infinity to be a positive constant, say 1.
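Schematically, the decomposition they justify is the following (a sketch, stated for the tangential component; the error size is indicative):

```latex
% Caflisch--Sammartino (analytic data): matched asymptotic expansion
u^\nu(t,x,y) \;=\; u^E(t,x,y)
  \;+\; u^P\!\Big(t,x,\tfrac{y}{\sqrt{\nu}}\Big) \;-\; u^E(t,x,0)
  \;+\; o(1),
% where u^E solves Euler (with v^E = 0 on the boundary) and u^P solves
% Prandtl with matching data u^E(t,x,0); the subtraction removes the
% double counting in the overlap region.
```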
So I look at this problem with some initial data u_0 — I take Prandtl with initial data, with u equal to 0 at y = 0 and u tending to 1 at infinity. What Oleinik does: she assumes that d_y u_0 is positive, and then she proves local existence. I'm not going into detail about the type of regularity: she uses some Hölder-type regularity — say three or four derivatives plus Hölder makes it work. Now, what's the idea of her proof? I won't give a lot of detail. The whole idea is to use a transformation called the Crocco transform. Due to the fact that d_y u is positive — and for short time you keep that — instead of using (x, y) as variables, you write the equation in terms of (x, u). Then instead of my domain being the upper half space, the domain becomes a strip, because u goes from 0 to 1. You make this change of coordinates, and what she ends up with is some sort of degenerate parabolic equation, but without nonlocal terms. The point of the transformation is that you get rid of the nonlocal term. For a long time this was the only way to prove her result: you do the Crocco transform. The issue with the Crocco transform is that it's not very suitable if you have the inviscid limit in mind, because it's written in a strange set of variables. So for a period of time it was an interesting question whether one can prove local existence for Prandtl, for monotone data, but without the Crocco transform. Let me explain how we can do that.
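The Crocco transform can be sketched as follows (my reconstruction, in the simplest case of zero pressure gradient):

```latex
% Crocco transform: take (t, x, \eta) with \eta = u(t,x,y) as independent
% variables (admissible since \partial_y u > 0), and let the unknown be
w(t, x, \eta) = \partial_y u .
% Differentiating the Prandtl equation in y and changing variables gives
\partial_t w + \eta\,\partial_x w = w^2\,\partial_\eta^2 w,
\qquad \eta \in (0,1),
% a degenerate parabolic equation (w \to 0 as \eta \to 1): the nonlocal
% term v has disappeared.
```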
So how can we do it without the Crocco transform? This is in a paper I have with one of my students, from a few years ago — 2015, I think. Let me explain the idea; it's a few lines, very simple. Oh, I didn't keep the equation on omega — that was a mistake; let me write it down again. We said that omega, which is d_y u, satisfies the same type of equation. So I can take an x derivative there — but that didn't help: I still have the same problem. So I have the equation on d_x omega with a problematic term, and the equation on d_x u with another problematic term. Two equations, each with a bad term. So how can we get rid of the problem, which appears in both equations? [Combine them?] Exactly. That's what we do: you take the u-equation, multiply it by d_y omega over d_y u, and subtract it from the omega-equation. When you do that, the bad term disappears. Of course, you get a bunch of commutators and such, but the bad term disappears. That's the idea. Now, remember that what I have in mind is high-order regularity: I'm not only interested in one derivative, I'm interested in taking many derivatives. So let me take s derivatives; I'll use the notation omega_{x^s} for d_x^s omega. The bad term also gets s derivatives, plus lower-order terms. The lower-order terms are OK, because they are mixed derivatives and so on. The only important term is the one with s plus 1 derivatives — that's the real bad guy.
This term is OK, as I said, because it's transport; this one also. Same here: I use the same notation with s. Here I have a similar term plus lower-order terms, and here I have d_x^s v times the bad factor, plus lower-order terms. But you can see that in both equations, this is the problem — the only problem. So what we do is introduce the notation g_s: g_s is d_x^s omega minus (d_y omega over omega) times d_x^s u — the denominator is omega because d_y u is omega. Now it's important that we use the hypothesis that omega = d_y u is positive, so that we can divide by omega. If you don't have the monotonicity, this proof breaks down. It turns out this quantity can even be written in a nice way: if I remember well, it's omega times d_y of (d_x^s u over omega). [Audience remark: it's like a normalization — since it's a derivative, you can write it with logarithms.] Yes, we can write it with logarithms. So basically, instead of looking at d_x^s omega alone, you correct it by this term. And the equation on g_s doesn't lose derivatives: if you write it down, you get d_t of g_s plus the transport terms — the transport term with that one; these are the only terms that lose a derivative, but they are transport — plus lower-order terms. Of course, there are many, many commutator terms to study and so on, but in the end this is the structure we get, and this structure is good for energy estimates. Any questions about this? [Is it short time or long time?] Short time — it's local existence. So what happens beyond local existence?
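The good unknown and the cancellation can be written out like this (a sketch of the computation described above):

```latex
% With \omega = \partial_y u > 0, define the good unknown
g_s \;=\; \partial_x^s \omega \;-\; \frac{\partial_y \omega}{\omega}\,\partial_x^s u
      \;=\; \omega\,\partial_y\!\Big(\frac{\partial_x^s u}{\omega}\Big)
      \;=\; \partial_x^s \omega - \partial_y(\log\omega)\,\partial_x^s u .
% The dangerous terms (\partial_x^s v)\,\partial_y \omega (from the
% \omega-equation) and (\partial_x^s v)\,\omega (from the u-equation,
% after multiplying by \partial_y\omega / \omega) cancel, and
\partial_t g_s + u\,\partial_x g_s + v\,\partial_y g_s - \partial_y^2 g_s
  = \text{lower-order terms} + \text{commutators},
% which closes a Sobolev energy estimate under the monotonicity assumption.
```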
[Blow-up, or no blow-up?] That's what we are doing now — all these questions about possible blow-ups and so on. For Prandtl, I'm aware of maybe two or three results about global existence for small data. And there are also global existence results for the stationary problem — I didn't mention the stationary problem yet, but I will. For now, let me summarize the local existence results we have: one type is analytic data; the second type is monotone data, but in Sobolev spaces. Now, there are different extensions of these things. One extension is in a paper I wrote with my student and with Vicol and Kukavica. [Question: what would s be, at the end?] Maybe 10 — less, actually; I think if I optimize, 3 would be fine. OK, so here I explained how we handle the x derivatives. You can take y derivatives as well — in this result, as I presented it, it looks as if it's only x derivatives, but you can take y derivatives too; it doesn't hurt. So you can propagate Sobolev regularity in x and y. Now, in terms of how these things can be extended: with Vicol and Kukavica, the four of us wrote another paper, which says that you can have regions where you are analytic and regions where you are monotone, and you can match — you can put the two results together. It's not a big improvement, but it tells you that you don't have to be monotone or analytic everywhere: in some region you are analytic, in some other region you are monotone.
And you can even have the monotonicity change. The interesting thing is that you can allow, for instance, a region where the limit at infinity is 1 and another where it is minus 1, and in the middle I am analytic — analytic in x; on either side I am monotone, and you can match. That's one extension. Now, another extension — but I need to spend some time explaining it — is a paper I wrote with David Gérard-Varet, where we do Gevrey regularity. I need to motivate the Gevrey regularity first. After that result there were a lot of other papers — many people, Zhang and many others, several Chinese groups in particular, wrote papers in different situations, also improving our result: we had Gevrey 7/4, and they went to Gevrey 2; there are papers doing the 3D case too. But I think the best result now is one by David with Helge — Helge Dietert, I believe; I need to check that I'm not making a mistake with the name. They have a very nice result where they do Gevrey 2 without any structural assumption. In my paper with David, we have a structural assumption: we have to assume that the profile starts increasing then decreasing — some condition on the critical points. So after a lot of developments, there is this paper of David Gérard-Varet with Dietert. I'll try to motivate the Gevrey regularity. But just before that, without spending too much time, let me talk a little about the stationary case, since I'm talking here about monotonicity. For the stationary case, you just get rid of the time-derivative term, and then you try to solve what remains. OK, now, does this make sense?
And how can you solve it? OK, the pressure I can recover; I can put back the other terms. Of course, now I cannot impose initial data in time anymore. So it turns out this system makes a lot of sense if you choose u to be positive, for instance. And then you look at x = 0, let's say. If u is positive, then x plays the role of time: this becomes an evolution problem in x. And the question is, you want to solve this up to some position x*. So you start with your data: u at x = 0 is some initial profile, u(0, y) = u0(y). So this is a very precise question; you can try to solve this. And again, the person who understood this first is Oleinik, around, say, 1960 or 1964; I'm not sure about the dates. But Oleinik has a book about these problems. So when u is positive, you can prove local existence, meaning that there exists some x* up to which you can solve. It turns out that you can also assume both: positivity and also the monotonicity condition, which helps. And now one of the results is what we call the favorable pressure case. If the pressure is favorable, what does it mean? The pressure being favorable means that it pushes you to stay positive. So favorable pressure means that dp/dx is negative, because then, when you put it in the equation, it becomes a positive term; as you solve, it pushes u to stay positive. So under conditions like this, she can prove global existence, meaning that you can solve in x all the way to infinity. Of course, you need to put the right regularity and so on. Now, her results are based on another transformation, not the Crocco transform, but the von Mises transformation.
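For reference, the stationary system being discussed on the board should be the standard stationary Prandtl system (my reconstruction; the notation u_e for the outer Euler trace is an assumption):

```latex
u\,\partial_x u + v\,\partial_y u + \frac{dp}{dx} \;=\; \partial_y^2 u,
\qquad \partial_x u + \partial_y v = 0,
\qquad u|_{y=0}=v|_{y=0}=0,\quad u \xrightarrow[\,y\to\infty\,]{} u_e(x),
```

with the profile u(0, y) = u0(y) > 0 prescribed at x = 0 playing the role of initial data, and favorable pressure meaning dp/dx <= 0, so that the forcing term −dp/dx >= 0 pushes u to stay positive as one marches forward in x.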
(There is some similarity with the Crocco transform that you apply. So, is it possible to start not at x = 0 but at minus infinity? Here, you just have it on a quarter of the plane. And also, do you have precise knowledge of what happens as x goes to infinity? Does the limit exist; can it be zero?) That's a good question. I mean, these are arguments based on the maximum principle. It looks to me like a traveling wave, yes. I understand why you are asking this question. Of course, I will not even say traveling wave, but we have these particular solutions, which are, for instance, y squared over 2. But then it doesn't match the boundary conditions. But that would be the candidate. So your question is very important; I think it's interesting. Maybe I didn't think about it quite enough: for the problem as it is written, what happens when x goes to infinity, and whether we can solve from minus infinity. Can we have solutions that are defined from minus infinity to plus infinity? That would be interesting. Of course, you have room in choosing the data; this is a free thing that you can do. I don't know. But related to your question is the fact that, let's say, if you take dp/dx to be minus 1, then you can take u to be y squared over 2. This will be a solution; it doesn't depend on x. It can play that role. I mean, it's playing the role of some soliton or something. (That's my question.) Yes, but OK, there's no x dependence here for this one. (But you can have a boundary layer; more generally, you can put a y dependence here, anything like this.) Yes. (So, do you mean u with a minus sign? Sorry.)
Yes, if I put a minus here, I need to put a minus here. But then, at the limit, it's not that good. Right: if I put a plus here and a plus here, that's the case where the pressure is not favorable. So this is more or less the setup of the paper I have with Anne-Laure Dalibard. With Anne-Laure, we prove the blow-up and explain how the blow-up takes place at the point x*. This is one of the objects we use; we use this idea of a soliton-type solution. OK. So this is about the stationary problem. My plan is to go back to the stationary problem in July, when I talk about blow-up results. So we have a blow-up result there with Anne-Laure. OK, now I want to spend some time explaining these Gevrey regularity results. In what I explained here, I did monotone data. I explained that for monotone data, we can work in Sobolev. I still didn't explain why analytic data is necessary otherwise. We had a way of canceling that problematic term if we have monotonicity; I explained that without monotonicity, it doesn't work. And it's even stronger than that: there is an ill-posedness result. So let me show you a result for this problem. What is this problem? I'm back to the time-dependent problem; this is what we call the inviscid Prandtl equation. Now, there are different ways of deriving it. It's not necessary that you start from Navier-Stokes, get Prandtl, and then get rid of the viscosity. This model can also be derived from Euler: you start from Euler, make some rescaling, and you end up with this model. So now, I have to remember what type of boundary conditions. Of course, for this model we impose the same boundary conditions.
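As a tiny symbolic sanity check of this sign bookkeeping (my own sketch, not from the lecture; the signs are paired consistently here: the adverse case dp/dx = +1 goes with u = y²/2, and flipping both signs gives u = −y²/2 for the favorable case):

```python
import sympy as sp

x, y = sp.symbols("x y")

def prandtl_residual(u, v, px):
    """Residual of stationary Prandtl: u*u_x + v*u_y + p_x - u_yy."""
    return sp.simplify(u * sp.diff(u, x) + v * sp.diff(u, y)
                       + px - sp.diff(u, y, 2))

# adverse pressure dp/dx = +1 pairs with the x-independent profile y^2/2 ...
r_plus = prandtl_residual(y**2 / 2, sp.Integer(0), 1)
# ... and flipping both signs (favorable dp/dx = -1) pairs with -y^2/2
r_minus = prandtl_residual(-(y**2) / 2, sp.Integer(0), -1)
print(r_plus, r_minus)  # 0 0
```

Both residuals vanish, confirming that the sign of the explicit profile must flip together with the sign of the pressure gradient.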
No, OK, I need to remember. Usually, when we look at this problem, we don't impose boundary conditions on u, only on v. So this is how you write the problem: I don't have viscosity anymore, so only v = 0 on the boundary. So we can solve this in the half space again, the same way. And usually we take it with zero pressure. So this will be the inviscid Prandtl. There is another model in the literature that we call hydrostatic Euler. People usually call the first one inviscid Prandtl because it's written in the half space. And what I'm going to show you is that this problem is actually well-posed in Sobolev; this problem you can solve in Sobolev. The other model, the so-called hydrostatic Euler, is exactly the same equation, but usually we put the pressure, and we take it between two plates, so the domain is a strip. And then we impose that v is equal to 0 here and v equal to 0 there: capital V, what I'm calling capital V. And here we put a pressure term; the pressure is not determined a priori. So this is inviscid Prandtl; that is hydrostatic Euler. It's exactly the same equation, but the only difference is that here the pressure is not determined, and the pressure depends only on x: it's like a Lagrange multiplier for the fact that you are imposing v = 0 at both boundaries. For inviscid Prandtl, there's no pressure; I mean, usually people write the model like this. If you want, you can add a pressure, but a fixed one coming from the Euler flow. It turns out that these two equations, even though they seem the same if you think about them, behave very differently. The only difference is the boundary conditions: one is posed in the whole half space, the other between two plates.
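To fix notation, the two systems being contrasted can be written as follows (my transcription of the standard formulations):

```latex
\text{Inviscid Prandtl, half space } \{y>0\}:\quad
\begin{cases}
\partial_t u + u\,\partial_x u + v\,\partial_y u = 0,\\
\partial_x u + \partial_y v = 0,\qquad v|_{y=0}=0;
\end{cases}
\qquad
\text{hydrostatic Euler, strip } \{0<y<1\}:\quad
\begin{cases}
\partial_t u + u\,\partial_x u + v\,\partial_y u + \partial_x p = 0,\\
\partial_x u + \partial_y v = 0,\qquad v|_{y=0}=v|_{y=1}=0,
\end{cases}
```

where p = p(t, x) is the Lagrange multiplier enforcing the two boundary conditions on v.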
You are imposing a boundary condition on the bottom plate and a boundary condition on the top plate. So if you think a little bit, you'd say, OK, they should be the same in terms of well-posedness and so on. It turns out that's not the case. The first one we can do in Sobolev; this one, no. This one has ill-posedness results. And actually, with that same student, we have a local existence result for this one under some type of monotonicity assumption: we have to impose a sort of convex profile. For convex profiles, we can prove local well-posedness for this. This was first done by Yann Brenier. Brenier has a result that is comparable to the Oleinik-type result; he works in Lagrangian coordinates and so on. With my student, with an idea similar to the one I mentioned here, very similar, some trick of combining derivatives and so on, we can prove local existence in Sobolev, but under some convexity assumption. OK, so my goal here is more to explain a little bit about this model. And again, to insist on the following fact: since this one can be solved in Sobolev, you might say, OK, the viscosity is a good term, usually, so if the problem is well-posed in Sobolev without viscosity, it should also be well-posed with viscosity. But that's not correct. So somehow you can see that for an equation like this, which one can solve in Sobolev, if you come and change the domain a little bit and impose this boundary condition, it is no longer well-posed in Sobolev. And the same if you add viscosity: it is no longer well-posed in Sobolev. So it's very striking, in a sense. OK, so let me try to explain how this one goes. This sort of local well-posedness for this model exists, I think, in the physics literature; it goes back maybe to the 60s or something. But at least the math paper is a paper of Hong and Hunter.
And I'll explain it a little bit, because the result is useful for a blow-up result that I'm going to mention. So we have a blow-up result, not yet for Prandtl, but for the inviscid Prandtl, for this model, with Charles Collot and Tej-Eddine Ghoul. And it will use a little bit of the method used to solve this. Now, the idea is very simple: it's just characteristics. David and Dietert also have, recently, a different way of solving it, more based on energy estimates, which I think also has its own interest. But it's very interesting that this equation, as it is written here, you cannot solve by regular Sobolev energy estimates, unless you do the tricks, and it's not completely trivial. Again, it's really in the spirit of this sort of cancellation, but slightly more sophisticated: you have to cancel the right bad terms. You can see it immediately: if you start taking derivatives, you will end up with equations similar to the one written there, and if you don't have monotonicity assumptions, I don't know what you can do with those terms. So, the paper of Hong and Hunter is based on characteristics. I mean, if you forget about this nonlocal term, this is Burgers, right? If you forget about this term, this is Burgers: u is just transported. So, if I am able to write down my system of characteristics, then u will be constant along those characteristics. So that's what we'll do: just write down the characteristics. Now, I'm going to make a change of notation: this I will write as small x, and this one I write as small y. Let's write everything lowercase for the Eulerian variables; the reason is the following.
So, everything Eulerian is lowercase, and now capital X, capital Y will be the Lagrangian coordinates. OK, so I want to understand the characteristics: x(t, X, Y) and y(t, X, Y). Of course, x-dot will be u and y-dot will be v. Makes sense for everyone? So, the interesting thing is that u is constant along the characteristics. So the value of u along a characteristic is a constant; it just depends on X, Y, on the initial data. So then my x, and of course, normally I need to say that x(t = 0) = X and y(t = 0) = Y, will be x = X + t u0(X, Y), where u0 is the initial data. It's like Burgers; this is exactly as if you were doing Burgers. OK, so now comes an idea. How can we find y? It's not trivial, because v is nonlocal. So it's complicated; if you want to find y directly, you can't. So the interesting idea is that your vector field is divergence-free. What that implies is that the transformation is volume-preserving. So, without doing anything, you know that the map (X, Y) to (x, y), for each time, has to be volume-preserving. So then how do I determine small y? I don't need to solve the ODE; all I will say is that the Jacobian determinant should be 1. So the determinant is 1, which means x_X y_Y minus x_Y y_X equals 1. That's the equation that we get now for y. This is really the idea, because otherwise, if you want to solve y-dot equals v directly, it's a nightmare; you cannot. OK, now, how can we solve this equation? I want to find y; at the end, I need to find y(t, X, Y). This is what I am after. Of course, we are solving this problem in the half space, and on the boundary, v is zero.
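Before moving on, a quick symbolic check of the volume-preserving idea in the simplest degenerate case (my sketch, assuming shear data u0 = u0(X) independent of Y, where everything reduces to 1D Burgers and y can be solved for explicitly):

```python
import sympy as sp

t, X, Y = sp.symbols("t X Y")
u0 = sp.Function("u0")  # shear-type initial data u0(X), independent of Y

# u is transported, so x = X + t*u0(X) (pure Burgers in x); the
# volume-preserving condition x_X*y_Y - x_Y*y_X = 1 then forces
# y = Y / (1 + t*u0'(X)), since x_Y = 0 here.
x = X + t * u0(X)
y = Y / (1 + t * sp.diff(u0(X), X))

jac = sp.simplify(sp.diff(x, X) * sp.diff(y, Y) - sp.diff(x, Y) * sp.diff(y, X))
print(jac)  # 1
```

The determinant is identically 1, as the divergence-free condition demands.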
So, actually, y on the boundary doesn't move; on the boundary, nothing moves. So, on the boundary, it's really Burgers; you can just think about it as Burgers there. So the idea is to try to integrate something starting from the boundary. Again, this is a transport equation, because the coefficients are given, and I can compute them: x_X = 1 + t ∂_X u0, and x_Y = t ∂_Y u0. So I can solve this by dividing by this factor, OK? (Can you assume a sign condition on ∂_X u0?) OK, so that's true. Now, this again has a strong similarity with Burgers. If you are solving Burgers, you can solve as long as this factor stays positive, and when it becomes zero, you get blow-up. And that's exactly what the paper of Hong and Hunter says: they tell you that we can solve for t less than 1 over the maximum of minus ∂_x u0. That's their theorem. Let me not write it out. It turns out that in what we are writing with Charles Collot and Tej-Eddine Ghoul, we can do slightly better: that will not necessarily be the time of blow-up. And somehow, the idea is the fact that you have two variables. This condition is the breakdown of the Burgers way of thinking, but since you now have two variables, we have a better way of continuing, using the fact that there is a y component also. OK, depending on how much time I can spend on that. But anyway, now, this equation, how can we solve it? There is another idea: what are the characteristics of this equation? If I put a zero on the right-hand side, what are the solutions?
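The Hong-Hunter solvability time can be computed numerically from samples of ∂x u0; a minimal sketch (my own, not from the lecture): the map x = X + t·u0 stays invertible while 1 + t·∂x u0 > 0, so the first crossing happens at T* = 1/max(−∂x u0), and never if u0 is nondecreasing in x.

```python
import numpy as np

def blowup_time(u0_x):
    """First time 1 + t*du0/dx vanishes, given grid samples of du0/dx."""
    worst = np.max(-np.asarray(u0_x))
    return np.inf if worst <= 0 else 1.0 / worst

xgrid = np.linspace(-np.pi, np.pi, 2001)
u0_x = np.cos(xgrid)          # derivative of u0(x) = sin(x)
print(blowup_time(u0_x))      # 1.0, since max(-cos) = 1
```

This is exactly the 1D Burgers crossing time; the improvement mentioned in the lecture comes from exploiting the second variable y, which this sketch ignores.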
If I put a zero, the solution is just small x equals constant. So, if I take a zero here, small x equals constant is a solution. So, basically, to solve this, I need to integrate the right-hand side along a curve that starts from the boundary and along which small x is constant. I just follow the level curve of small x back, OK? One has to convince oneself, but it's easy: if x is constant along the curve, you put it in, and that gives you zero. So this actually gives us a formula. Maybe I'll just give you the formula. So, x equals constant: how do we write that? I think of it as capital X being a function of small x, y, and t. So I'm just inverting that relation: we have small x as a function of capital X, and I invert it. So you see, here I invert this: I think of capital X as a function that I call c(x, y, t). And then I'm integrating along a line where small x is constant. So then I end up with the following formula. I'm integrating this quantity. So again, you know what your small x is here; it's a fixed number, call it x-bar. That small x there is the small x of (X, Y), so it's a constant; it's fixed. So then I look at the curve that goes back to the boundary along which my small x equals x-bar: all these points have the same small x. And you just integrate this quantity from 0 to Y. That's all. OK, now you have a formula; you can solve your characteristics. You plug it in, and that gives you the solution. You can check back that this gives you the solution. Now, of course, all this suggests that maybe you can do something similar for Prandtl. I was planning, well, I went maybe very slowly today, to show you the ill-posedness result for the linearized Prandtl. So, let me just take two or three minutes.
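As far as I can reconstruct the blackboard formula (the name c for the inverted map is from the lecture; the rest is my reading of the computation, so treat it as a sketch):

```latex
y(t,X,Y) \;=\; \int_0^{Y} \frac{dY'}{1 + t\,\partial_X u_0\bigl(c(\bar x, Y', t),\, Y'\bigr)},
\qquad \bar x \;=\; X + t\,u_0(X,Y),
```

where c(x̄, Y', t) is the value of capital X at height Y' on the level curve where small x equals x̄. Indeed, along that curve dy/dY' = (x_X y_Y − x_Y y_X)/x_X = 1/x_X by the determinant condition, and y = 0 at Y' = 0 since the boundary doesn't move.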
I mention this because I don't know if I will have time in July to come back to it. This is an interesting paper by David Gérard-Varet with Dormy, where they prove that the linearized Prandtl equation is ill-posed. So you take Prandtl and linearize it around something which is not monotone: you need a critical point, a change of monotonicity of your profile. They prove that the linearized problem is ill-posed. So you can linearize around a profile, what they call u_s(y). Of course, for a profile like u_s(y), you might ask about the viscosity term; but if you take u_s(y) for the inviscid problem, then u_s(y) is just a solution of your inviscid Prandtl, because v is zero and ∂_x u is zero, so nothing is happening. So now you can linearize around profiles like this. What you get is the following linearized equation, with constant-in-x coefficients, because u_s is just a function of y, OK? So let's take the linearized Prandtl without pressure. You get this equation, where this is the profile that you are linearizing around. And what they prove is that this linearized equation is ill-posed in Sobolev; more precisely, they prove that you can get growth. Of course, here there's no x dependence in the coefficients, so you can take the Fourier transform in x, and you can get growth which is exponential. So you can look at modes e^{ikx}, and you can get solutions which are e^{ikx} times a growth of this nature, times some function of y. So this is the kind of growth that you get. And of course, when you get growth like this, it suggests that maybe you should work in Gevrey regularity.
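The growth being described, if I recall the Gérard-Varet-Dormy result correctly (the constant σ is my notation), is of the form:

```latex
u_k(t,x,y) \;\sim\; e^{ikx}\; e^{\sigma\sqrt{k}\,t}\;\phi_k(y), \qquad \sigma > 0,
```

so against data whose Fourier coefficients decay like e^{−δ√k}, i.e. Gevrey 2 in x, the growth e^{σ√k t} can be absorbed for t < δ/σ. This is the heuristic for why the Gevrey exponent 2 appears as the critical regularity.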
And actually, after a lot of work, the final result of David with Dietert does the nonlinear problem in that same regularity. So they really reach the critical type of regularity. OK, so again, the plan for the next two lectures, July 1st and July 3rd. In one of them, the plan will be to talk a little bit about inviscid limits with Prandtl. There is a result that I have with David and Maekawa, where we give some justification of the limit, related to all these Gevrey regularities. So I think I will spend some time recalling the result of David with Dormy, and then some of the developments on how we justify the inviscid limit with Prandtl, but without analyticity. And then the other lecture will be about blow-up: blow-up for the stationary problem and for the non-stationary problem. We'll see how much of that we can cover.