last time, so that I can make the connection. So let me write a bunch of equations on the board; this is what you will find in the paper. We were looking at the defocusing NLS: $u$ is complex, and $x \in \mathbb{R}^d$. We did a change of variables — first the self-similar one, and then the dynamical rescaling — and we arrived exactly at the following system (schematically; I'll just put a $\rho$ where I need one):

$\partial_\tau \rho = -\rho\,\Delta\psi - \mu\,\tfrac{\ell(r-1)}{2}\,\rho - \big(2\,\partial_z\psi + \mu z\big)\,\partial_z\rho,$

$\rho\,\partial_\tau\psi = b^2\,\Delta\rho - \big(|\nabla\psi|^2 + \mu(r-2)\psi - 1 + \mu\,\Lambda\psi + \rho^{p-1}\big)\,\rho.$

Here $z$ is, if you want, the radial variable, and what I call $\Lambda$ is just $z\,\partial_z$. There are a bunch of parameters floating around in the picture: $\ell$, which if I am working with NLS is $4/(p-1)$; $r$, which is $2/(1-e)$; and $\mu$, which I think satisfies a relation like $\mu(r-2) = 1$, or something like this — in fact everything is related. The free parameter is $e$, where $e$ is some strictly positive number; this is necessary. And $b$ is a function of time: $b(\tau) = e^{-\eta\tau}$. So I have a bunch of numbers — that's not the point. The point is that I have a coupled system of two equations, and the key is that there is one parameter here, this $b$, that I want to treat as a perturbation, because it is exponentially small in time. So let me tell you what we're going to do today: we're going to run the program.
And the program is the following. First, set $b = 0$; I'm going to call my equation (*). Second, solve — or find — what I call $(\rho_P, \psi_P)$, $P$ like profile, a stationary solution to (*). This is what we started the other day, and it is equivalent to having a self-similar solution of compressible Euler with a given blow-up speed. And then — this is what I want to focus on today — study the stability problem. Remember, this is after renormalization: if I want to produce a blow-up solution of the root problem, what I need is to construct a global-in-time, non-vanishing solution to this equation. And the way I'm going to do this is simply by taking my stationary solution and constructing initial data such that the corresponding solution is global in time and, in a sense to be made precise, globally attracted by this profile. So this is the strategy — the stability. Question: is this still under the assumption that $b = 0$? Is step 2 already for $b \neq 0$? You are absolutely right: steps 1 and 2 are for $b = 0$. I find a stationary solution for that simpler problem — $b$ depends on time anyway. And then I say: if the leading-order term is some function $\rho$ which is stationary in the time $\tau$, then this $b^2$ term is exponentially small in time, so it is very reasonable to believe that maybe you can treat it as a perturbation. That is exactly what you want to do. But note the following point: this is the NLS equation written in phase and modulus, and there is no $b = 0$ system for NLS — it doesn't make sense.
But if I run the same program with fluids, then of course there is a genuine $b = 0$ problem, and I can simply prove theorems about it — which is exactly what we did: there is a whole range of stability analysis that concerns Euler. And then you can ask yourself: among the profiles you can stabilize for Euler, which ones are admissible, that is, such that I can also stabilize this problem? This is the bridge between Euler and NLS. In principle you would not attack NLS directly; if you wanted to do it directly, forgetting this ordering, you would have a problem, because you don't have an interpretation of $b = 0$. Absolutely — it doesn't make sense by itself. So I need an analysis that of course treats $b = 0$, but is also able to treat $b$ non-zero. OK? Yeah, please. Question: just to interrupt again — does $e$ strictly positive correspond exactly to supercriticality? No, absolutely not. The form of $b$ is something that I enforced from scratch: I forced this parameter $b$ to be an exponential — I'm sorry, these notations are awful — $e^{-\eta\tau}$, where $\eta$ is a number, OK? But let me say it again. If I set $b = 0$, if I do Euler, there is a free parameter when you renormalize Euler, because it has a two-parameter scaling symmetry. So if you look for self-similar solutions of Euler, there is a free parameter, which is what we call the blow-up speed; it's this parameter $r$, and it's exactly my $e$ — for Euler or here, it's the same guy. So I always have free parameters in my problem: the dimension $d$; $\ell$, which encodes the nonlinearity;
so it's $p$, or it's $\gamma$ if you're doing fluids, but call it $\ell$ — and $r$, which is the blow-up speed. Question: my question was, where do you see the supercriticality? Because this you could not do in dimension four — the fact that your energy is supercritical, right? So let me recall this briefly, and let me say these words again — no, no, it's not too long; I wanted to say this. Step one, I set $b = 0$. Step two, I look for self-similar solutions, that is, stationary solutions of this equation. Through an Emden transform, this maps my equation onto something autonomous — so if you agree, I'll just erase this, the strategy is clear. When I look at the stationary equation, after the change of variable $x = \log z$, which is the Emden transform, I map my system onto exactly this:

$\Delta\,\frac{d\omega}{dx} = -\Delta_1, \qquad \Delta\,\frac{d\sigma}{dx} = -\Delta_2,$

where $\Delta$, $\Delta_1$, $\Delta_2$ are functions of $\omega$ and $\sigma$ only, not of $x$. So I've mapped myself onto an autonomous planar system, and roughly, when I draw the phase portrait of this thing in the $(\sigma, \omega)$-plane — let me take a minute to redraw this, because I'm going to need it later anyway — I have: the sonic line, which is $\Delta = 0$, with the points we named before; the curve $\Delta_1 = 0$; and, in the situation I want to put myself in, the curve $\Delta_2 = 0$ arranged like this. Their intersection here is the so-called point $P_2$. And what I have is the following: there is a trajectory which is going to do this.
It's going to cross inside here, then come back, and then go out here — maybe I should dash it. This is what I said last time. And this point is $P_5$, which is this point here. So what I said last time is the following: on this phase portrait there is a unique trajectory which comes from the origin, and it corresponds to the smooth, radially symmetric solution. And in a certain range of parameters — I need

$r < r^*(d,\ell) = \frac{d+\ell}{\ell+\sqrt{d}}$

— the unique trajectory which comes here is going to cross through $P_2$. And if I'm lucky — I will need to discuss that — it goes on to $P_4$, which is self-similar decay, which is what I want. So let me say these words again: this unique trajectory is the unique solution smooth at the origin; then it evolves, and I have this picture because I've made an assumption. Question: and that assumption is typically already in the supercritical range? Just asking for this to be the picture already puts you away from criticality? Yes, but it's not enough — you need to be even further. This is what I call supercritical numerology. It's already a condition: writing it down is the typical condition for Euler. And what I said the other time is that for NLS you want $r = 2/(1-e)$ with $e$ positive, so you want $r > 2$, hence $2 < r^*(d,\ell)$ — and this already demands $d \geq 5$. That is: if you want a nice profile for Euler, you are already in the supercritical range; if you want this profile to be compatible with the NLS equation — with the fact that the $b^2$ term can be treated as a perturbation — you need to be even beyond. So there is a condition; it's like the Joseph–Lundgren exponent, in some sense. It's already telling you that you need to be further away. And the algebra for fluids, for Navier–Stokes, is not the same.
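Since this numerology involves nothing beyond arithmetic, it can be checked directly. A minimal stdlib-Python sketch (the helper names are mine) evaluating $r^*(d,\ell) = (d+\ell)/(\ell+\sqrt{d})$ and checking that $2 < r^*(d,\ell)$ with $\ell > 0$ forces $d \geq 5$:

```python
import math

def r_star(d, l):
    # blow-up-speed threshold r*(d, l) = (d + l) / (l + sqrt(d))
    return (d + l) / (l + math.sqrt(d))

def admits_r_greater_than_2(d):
    # 2 < r*(d, l)  <=>  l < sqrt(d) * (sqrt(d) - 2):
    # impossible for d <= 4 with l > 0, possible for d >= 5 and l small enough.
    return any(r_star(d, l) > 2 for l in [k / 100 for k in range(1, 500)])

assert not admits_r_greater_than_2(3)
assert not admits_r_greater_than_2(4)
assert admits_r_greater_than_2(5)
```

The algebra behind the scan: $2 < (d+\ell)/(\ell+\sqrt d)$ rearranges to $\ell < \sqrt d(\sqrt d - 2)$, whose right-hand side is positive only for $d \geq 5$.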
That is, the equations are not exactly the same — this is exactly what you told me at the first conference: is $\rho$ the modulus of $u$ squared, or the modulus? That's exactly the point; there are factors all over the place. So you do your algebra, and you realize — if I remember well — that I need $r > \frac{2+\ell}{1+\ell}$. And when you do the algebra, you discover that this demands $d \geq 3$ for NS compatibility. So in dimension 2 we can certainly say things about the stabilization of self-similar solutions to Euler in this sense; but we know that in dimension 2, if we take such profiles and feed them to the corresponding machinery, the parameter $b$ will not be small — it tends to grow. It means that the viscous term, as you might expect, has a destabilizing effect on singularity formation, at least for these profiles. But in dimension 3 there is a range: it doesn't work for all nonlinearities, but it gives you a window of nonlinearities for which you can hope to do this — which is exactly the way things have been done — a window where you can pretend, or at least hope, that the viscous term, or the additional quantum pressure there, is lower order. So this is the picture. And here is what I said the other time — this is how we finished — and what I want to make a little clearer today; it is something that has been seen on other problems. You have to be careful. In fact, there are tons of these trajectories: the unique trajectory is certainly going to go through $P_2$, but then something funny happens at $P_2$, which I'm going to recall later. There is a generic limited regularity at $P_2$. Another way of saying this is that I could very well construct a solution to my ODE in the following way: I take the unique curve that enters here —
I take any curve that comes out here — there are tons of them. And I claim that the corresponding solution constructed this way has some finite regularity; interestingly enough, I can make this regularity as high as I want by playing with my parameter $r$. But it will never be smooth, because in fact there is only one solution that passes through $P_2$ in a $C^\infty$ way — I'm going to recall this. And there is no reason why it should be this one, unless I choose $r$ in a very careful way. I want to explain briefly why this is an issue; it is something to be kept in mind. But before motivating that, I want to pretend that I don't have this problem and jump to point 3: I want to tell you about the stability problem, which in particular is of course connected to that question. So I have a linear problem: I need to look at the linearized flow, and I need the machinery to close nonlinear estimates. So, just in order to write things that are correct, let me write down the strategy; it's very clear. This is my equation. The first thing you want to do is linearize your flow close to your self-similar profile, which is nothing but the stationary solution of this; you obtain a linearized operator, and you study what this linearized operator does. And of course, at first, when I look at the stability problem, I again stick to $b = 0$; at some point we will need to remember that $b$ is non-zero, and I'll say a word about this. But roughly, the main part of the analysis is really to take $b = 0$ and understand the linearized flow close to one of these profiles. So let's take a second to see the kind of thing that we have. You can linearize the equation.
You just expand $\rho$: you write it as profile plus correction — and maybe I'm going to call the correction $\rho$ as well; what I'm doing is clear, I'm just expanding. So call the full nonlinear solution of the problem $(\rho_T, \psi_T)$, $T$ like total, and expand around $(\rho_P, \psi_P)$, my self-similar solution. As a matter of fact, if I want to be precise: for technical reasons I don't like to work with $\psi$; I prefer to work with what I call $\phi$, which is really the same thing, $\phi = \rho_P\,\psi$. I'm just shifting because it's more convenient in the analysis. OK, so you send it in, you shake it out, you use the fact that $(\rho_P, \psi_P)$ is a stationary solution of your problem, and you get an equation for the perturbation. The typical thing that you get is something like this:

$\partial_\tau\rho = H_1\,\rho - H_2\,\Lambda\rho - (\Delta + H_3)\,\phi + N_\rho,$

$\partial_\tau\phi = -(p-1)\,Q\,\rho - H_2\,\Lambda\phi + (H_1 - e)\,\phi + N_\phi.$

So what did I introduce here? $N_\rho$ and $N_\phi$ are just my nonlinear terms — they are whatever they need to be; every quadratic term that gets created has been put there in front of the garbage. Again, I took $b = 0$, so $b^2\Delta$ is not in this picture. And I need to tell you what $H_1$, $H_2$, $H_3$ and $Q$ are — this is where things start being interesting. It's just algebra, so I can actually write it down.
So $H_1$ is just something like $-\Delta\psi_P + \mu\,\tfrac{\ell(r-1)}{2}$. Actually, it is maybe more clever to write these down in terms of the Emden transform: these are quantities given to me by my profile, and the typical kind of formula you have is

$H_1 = \frac{\mu\ell}{2}\,(1-\omega_P)\Big(1 + \frac{\sigma_P'}{\sigma_P}\Big).$

My profile $(\rho_P, \psi_P)$ is equivalent to the trajectory $(\omega_P, \sigma_P)$ — maybe I put a $P$ on it; it is always the same thing, just the change of variable applied to the stationary solution. So $H_1$ is equal to this, $H_2 = \mu\,(1-\omega_P)$, $H_3 = \Delta\rho_P/\rho_P$, and finally, maybe the most important quantity, $Q = \rho_P^{\,p-1}$. What do I mean by all that? I simply mean that the coefficients $H_1$, $H_2$, $H_3$, $Q$ of your linearized operator depend on your profile. That is the standard thing you should expect when you linearize — and here you linearize a quasi-linear wave equation, so these profiles really modify the highest number of derivatives in your equation. Let me note, however: here is the origin in space, where I have a smooth solution, and eventually out here, let me remind you, there is decay. So asymptotically in space some of these coefficients disappear, and I can understand what happens as $z \to \infty$: I have a sort of limit structure of my operator, if you want, as $z \to +\infty$. But there is one difficulty that we will have to face: there is no formula for the profile.
It's actually very interesting: on this phase portrait, sometimes you do have formulas — there are the celebrated sets of explicit solutions. In some regimes of parameters, some trajectories — not this one, but some others — may come with formulas; there are hidden first integrals and these kinds of things. But it's not the case here, at least not that I'm aware of. So one of the difficulties we have to face is that the only way we understand the profile is as a curve on the phase portrait. Now you ask me: is this quantity positive? Is this quantity bigger than the square of that other one? What's going on? Well, you need to go dig for it — nothing is given to you. Maybe it's very simple, maybe you're lucky, maybe not. Of course there is structure in this phase portrait — you understand it's not arbitrary — but you have to be careful. This is the picture of $\omega$ in terms of $\sigma$; if you write $\omega$ and $\sigma$ as functions of $x$, you will see that this guy is far from being monotonic. It's an unpleasant, unfriendly guy. So this is one of the difficulties we will have to face: we don't have any formula. So now I need to explain how you deal with something like this. Maybe there is something completely elementary — and I promised not to erase this; these are my questions; I'm going to keep them. One simple observation first. Already here, depending on who is doing the analysis, people may have different points of view on these kinds of things. Typically, specialists of fluid mechanics will tell you: I have compressible Euler, say in the radially symmetric case; I'm going to use my Riemann invariants and make them evolve. And this is how things can be written, if you want. However, there is one thing that is very clear from here —
and it's completely standard in compressible fluid dynamics: I don't need this machinery of Riemann invariants, which is very radial; what we are facing here is just a wave equation. Maybe the simplest way of seeing this: you take $\partial_\tau$ of the second equation and compute $\partial_\tau^2\phi$. You get a bunch of cross terms; you are going to hit $\partial_\tau\rho$, which you compute from the first equation, and that asks you for $\rho$, which you read off from the second. So at the end of the day you get something like this, and this is how I'm going to think about it:

$\partial_\tau^2\phi = (p-1)\,Q\,\Delta\phi - H_2^2\,\Lambda^2\phi - 2\,H_2\,\Lambda\,\partial_\tau\phi + a_1\,\Lambda\phi + a_2\,\partial_\tau\phi + a_3\,\phi + \text{nonlinear terms in } (\rho,\phi).$

This is my equation; this is what I have to study. And you understand that $a_1$, $a_2$, $a_3$ are again computed from $H_1$, $H_2$, $H_3$, so they depend on the profile. And you understand immediately what you're facing. Let me remind you: $\Lambda = z\,\partial_z$, with $z$ the radial variable. The principal part of my operator is very clear: I have two derivatives in space here, but two derivatives there as well. In particular, when I count derivatives, the top-order spatial operator is

$(p-1)\,Q\,\Delta - H_2^2\,\Lambda^2 = \big((p-1)\,Q - H_2^2\,z^2\big)\,\partial_z^2 + \ldots,$

so you should expect — you have to be careful — that this coefficient may vanish. And it will vanish.
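To see the elimination mechanism in isolation, here is a toy constant-coefficient version (freezing $H_2 = H_3 = a_i = 0$ and writing $c^2$ for the frozen value of $(p-1)Q$ — toy labels, not the actual profile-dependent coefficients):

```latex
% toy system: keep only the top-order structure of the linearized system
\partial_\tau \rho = -\Delta \phi, \qquad
\partial_\tau \phi = -c^2 \rho .
% differentiate the second equation in \tau and substitute the first:
\partial_\tau^2 \phi = -c^2\,\partial_\tau \rho = c^2\,\Delta \phi .
```

In the real problem the coefficients are not constant: $c^2$ is replaced by the profile-dependent $(p-1)Q$, and the $H_2\Lambda$ transport terms generate the mixed and lower-order terms above.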
And of course, this degenerates: it vanishes at $P_2$. In some sense it is meant to — your PDE, your linearized operator, has an issue there; of course it vanishes at $P_2$. But you understand that you do not only have second derivatives in space — this is absolutely fundamental. You have tons of terms here: mixed space-time derivatives, $\partial_\tau\partial_z$; first-order terms; and zeroth-order terms. And at this point, you can't just throw anything in the garbage; you need to understand what is going on with this guy. I think this is incomprehensible if you don't make a connection with what people have done on singularity formation problems before. So let me explain, conceptually, how you address these kinds of questions on much simpler problems — what kind of estimates should we expect? And let me give you the answer in a completely elementary case: the nonlinear heat equation. I need to explain conceptually what's going on, because otherwise it's just an abstract box. Question: this coefficient will not change sign, however, right? Oh yes, it will — it can even change sign. Yes, of course. It's a very unfriendly guy. Let me put it this way: I think we would all have dropped the pen, seeing this monster. But so many people worked ahead of us that there is really a connection with tons of things that have been done before, and this allowed us to find a way to deal with it. But this is very unfriendly; there is no doubt about that.
But again, very clever things were done ahead of us, and this is going to give us the intuition of what should be done here. So let me go back to a completely elementary problem:

$\partial_t u = \Delta u + u\,|u|^{p-1}.$

This is my root equation; I want to study singularity formation for it. Then I do my dynamical change of variables, as we've done several times. I zoom:

$u(t,x) = \frac{1}{\lambda(t)^{2/(p-1)}}\, v(\tau, y), \qquad y = \frac{x}{\lambda(t)}, \qquad \frac{d\tau}{dt} = \frac{1}{\lambda^2},$

and I declare that here, in this case, I can decide that $\lambda = \sqrt{T-t}$. So I renormalize self-similarly, and I know what I get — another equation, which is what I call the renormalized flow:

$\partial_\tau v = \Delta v - \frac{1}{2}\Big(\frac{2}{p-1} + \Lambda\Big)v + v\,|v|^{p-1},$

where again $\Lambda = y\cdot\nabla$, or call it $z\,\partial_z$, whatever you want. So it's exactly the same flow with this new term; that's the new guy. The question is: what kind of estimates can you expect? Imagine either that you want to estimate things here, or that you are linearizing close to a given stationary solution of this equation, and you ask yourself what kind of estimate you can get on your flow. There are two things you can do. So maybe let me make a point here — it's always the same story. If I plot $u(t, \cdot)$ against $x$, I imagine that $u$ is concentrating, and I have zoomed on the concentration window, because I defined $y = x/\sqrt{T-t}$. And I imagine that my profile in $y$ looks like a given self-similar profile plus a correction. So you have to keep in mind that estimating in a large region of $y$ means that $x$ can be small while $y$ is still very large.
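As a sanity check on this change of variables (a minimal stdlib-Python sketch, with my own naming): the constant $v \equiv \kappa$ with $\kappa^{p-1} = 1/(p-1)$ kills the right-hand side of the renormalized flow, and unfolding the renormalization it corresponds to the classical ODE blow-up solution $u(t) = \kappa\,(T-t)^{-1/(p-1)}$ of the root equation:

```python
p = 3
T = 1.0
kappa = (1.0 / (p - 1)) ** (1.0 / (p - 1))   # kappa^(p-1) = 1/(p-1)

# stationarity of v = kappa for the renormalized flow: for a constant,
# Delta v - (1/2)(2/(p-1) + Lambda) v + v^p reduces to -kappa/(p-1) + kappa^p = 0
assert abs(-kappa / (p - 1) + kappa ** p) < 1e-14

def u(t):
    # the same solution seen in the original variables
    return kappa * (T - t) ** (-1.0 / (p - 1))

# check d/dt u = u^p (Delta u = 0 for a space-independent solution)
for t in [0.0, 0.5, 0.9]:
    h = 1e-6
    dudt = (u(t + h) - u(t - h)) / (2 * h)
    assert abs(dudt - u(t) ** p) < 1e-4 * u(t) ** p
```

This is of course only the spatially flat solution; the whole point of the lecture is what happens around genuinely $y$-dependent profiles.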
So estimating solutions after renormalization simply means that I'm looking at things in my concentration window here — this is what needs to be kept in mind. There are two kinds of estimates I can do. First kind. Think of it this way: with the equation you have here — if you linearize, you can even think of the linear problem if you want — there is something I can certainly do, something completely simple: I can estimate a global Sobolev norm. I just unfold my change of variables: I set $y = x/\lambda$, pick up $\lambda^{-2s}$ from the derivatives, $\lambda^{4/(p-1)}$ from the amplitude, and $\lambda^d$ from the measure, so

$\int_{\mathbb{R}^d} |\nabla^s u|^2\, dx = \frac{1}{\lambda^{2(s - s_c)}} \int_{\mathbb{R}^d} |\nabla^s v|^2\, dy, \qquad s_c = \frac{d}{2} - \frac{2}{p-1}.$

So what does it mean? Imagine that you just get rid of the nonlinear term and look at this linear operator. What I'm telling you is the following: if I compute a global Sobolev norm of my $v$ in the $y$ variable, it is equal to nothing but $\lambda^{2(s-s_c)}$ times the Sobolev norm of my root problem in $u$, that is, $\int |\nabla^s u|^2\, dx$. And then you have to remember that I made the choice $\lambda = \sqrt{T-t}$, which means that when I go to $\tau$ time, $\lambda^2 = e^{-\tau}$. So what does it mean?
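The unfolding computation above can be verified numerically. A small stdlib-Python sketch (my own helper names), taking $d = 1$, $s = 1$, $p = 5$, so that $s_c = \frac{d}{2} - \frac{2}{p-1} = 0$, and a Gaussian profile $v(y) = e^{-y^2}$:

```python
import math

lam, p, d, s = 0.2, 5, 1, 1
s_c = d / 2 - 2 / (p - 1)           # critical Sobolev exponent, = 0 here

def dv(y):                           # derivative of the profile v(y) = exp(-y^2)
    return -2 * y * math.exp(-y * y)

def du(x):                           # u(x) = lam**(-2/(p-1)) * v(x/lam), differentiated
    return lam ** (-2 / (p - 1) - 1) * dv(x / lam)

def trapz(f, a, b, n=20000):         # plain trapezoid rule
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

lhs = trapz(lambda x: du(x) ** 2, -8 * lam, 8 * lam)             # int |grad u|^2 dx
rhs = lam ** (-2 * (s - s_c)) * trapz(lambda y: dv(y) ** 2, -8, 8)
assert abs(lhs - rhs) / rhs < 1e-9
```

The two quadratures use matching grids, so the identity holds to near machine precision; varying `lam` changes both sides by the same factor $\lambda^{-2(s-s_c)}$.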
It means that if I look at this linear problem — getting rid of the nonlinearity — I can compute exactly the evolution of Sobolev norms: norms above scaling will decay exponentially, while norms below may have a problem. For the linear problem, this is the whole story. What does it mean? It means this guy — this new operator here — is your friend; it's your best friend. It's an artifact of the renormalization: it's the machine that allows you to get decay. But of course, you need to take sufficiently many derivatives; how many depends on the scaling of your problem, on the renormalization you did — it's already written here. So the natural functional setting to derive decay in time, exponential decay, is simply to compute sufficiently many derivatives of the equation. And you understand the nice thing about this: it's a global norm. It has nothing to do with the concentration picture; it's really something global — just a change of variables. Question: so this has nothing to do with the coercivity of the second-order operator $\Delta - \frac{1}{2}\Lambda$, which may not be coercive? Well, is it coercive? It depends what you mean. This thing is whatever it is. All I'm saying is that if I compute Sobolev norms for this operator, I'm going to find this identity — and the identity is completely trivial, because I know that I'm coming from a problem above. It's just an artifact: this thing here is just an artifact of my change of variables. If I had the wave equation — put a $\partial_t^2$ here — I could play exactly the same game: I renormalize, but I know that above, my Sobolev norms are conserved.
So if I write down whatever the analog of this problem is, it is automatic that the linear part — whatever it produces, and it is going to produce something like this — comes with the fact that if I compute high enough Sobolev norms, I will get decay. This observation is completely universal; it has nothing to do with the heat equation. It's very simple. So maybe I'll erase this and redraw it later. This is a universal machine. Question: that's why the sign there doesn't matter? Exactly — whatever structure this thing has is compatible with this computation, because it's an artifact of my change of variables. But it's only true for high derivatives: you need to take sufficiently many of them, dictated by the scaling, because it's written exactly here. Low Sobolev norms tend to grow — it's the other way around — because in this process something is being created. So from scratch, you need to understand that you must take high enough derivatives. But this is the first way, and it's completely universal: it can be used for NLS, for the wave equation — I'm going to get back to this. But there is a second way to get estimates, which is absolutely not this. The first one is just an artifact of renormalization; I'm just mapping things back to my root problem. The second is something you learn from Giga–Kohn. What they observed — it's a beautiful observation — concerns my renormalized equation. Actually, they did it at the nonlinear level, but you can of course also do it at the linear level if you want. When I look at this equation, there is another set of estimates which has nothing to do with the root problem, nothing to do with what's happening above. It's something else. In Giga–Kohn —
the basic observation is that the operator $\Delta - \frac{1}{2}\,y\cdot\nabla$ can be realized as

$\frac{1}{\rho}\,\nabla\cdot(\rho\,\nabla), \qquad \rho = e^{-|y|^2/4}.$

So what does it mean? It means that there is a natural energy identity. In particular, if you hit the equation with $\partial_\tau$ and integrate by parts — with the measure $\rho\,dy$ — then you will get something like this:

$\frac{d}{d\tau}\int_{\mathbb{R}^d}\Big(\frac{1}{2}\,|\nabla v|^2 - \frac{1}{p+1}\,|v|^{p+1} + \frac{1}{2(p-1)}\,v^2\Big)\,\rho\,dy = -\int_{\mathbb{R}^d} (\partial_\tau v)^2\,\rho\,dy,$

where the middle term is your nonlinear term, and the last one comes with a plus — maybe the constant is $\frac{1}{2(p-1)}$, something like this anyway — and $\rho = e^{-|y|^2/4}$. Think of it this way: this has nothing to do with the original equation; it's a machine. This is written at the nonlinear level — that's how Giga–Kohn used it — but let me make the following point: I could write this at the linear level, and my hope would be to control norms like

$\int_{\mathbb{R}^d} v^2\, e^{-|y|^2/4}\, dy.$

This is exactly the kind of quantity you would like to put your hands on. So what does this quantity mean?
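The divergence-form realization is just the product rule, but it can be checked numerically. A stdlib-Python sketch in one dimension (the test function $v = \sin$ is my choice), comparing $v'' - \frac{y}{2}\,v'$ against $\frac{1}{\rho}\,(\rho\,v')'$ with $\rho = e^{-y^2/4}$:

```python
import math

def rho(y):                      # Gaussian weight e^{-y^2/4}
    return math.exp(-y * y / 4)

def dv(y):                       # v(y) = sin(y)
    return math.cos(y)

def ddv(y):
    return -math.sin(y)

def drift_form(y):
    # Delta v - (1/2) y . grad v, in one dimension
    return ddv(y) - 0.5 * y * dv(y)

def divergence_form(y, h=1e-5):
    # (1/rho) d/dy ( rho dv/dy ), outer derivative by centered difference
    g = lambda z: rho(z) * dv(z)
    return (g(y + h) - g(y - h)) / (2 * h) / rho(y)

for y in [-2.0, -0.3, 0.0, 1.1, 2.5]:
    assert abs(drift_form(y) - divergence_form(y)) < 1e-7
```

The point of the realization is exactly the integration by parts used in the energy identity: against the measure $\rho\,dy$, the drift term is absorbed into the divergence.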
It means that — you see, this exponential weight, of course, localizes; it's very strong — you control the flow locally in space, on the singularity. So you should think of it as: maybe I control my flow from 0 to 100, something like this. So I can hope to get control in time, decay in time, of this kind of norm. This is typically what I can hope for. And this estimate has nothing to do with the global Sobolev estimates; it's something different. So now look at the analysis you can do. Does it connect to the decay estimate you get by taking many derivatives? It does not — but it must be connected; you have an excellent point. So let me put it this way. This is very interesting: this is where there is a bifurcation from the parabolic world and its literature. There, you will not see Sobolev norms and this kind of thing; this is not the way people think about it. In the parabolic literature, people typically say: well, I have control of the solution on the singularity, and I want to propagate an L-infinity estimate, which I typically do using some maximum principle or that kind of tool. Because of course, it's not because you control a norm like this that you control your solution: it localizes very strongly on the singularity, and maybe something away from it wants to get wild, wants to grow. So a norm like this alone cannot control everything; it's just part of the story. And this is exactly the point: how it's done in the literature depends on the problem, and this kind of identity — control of this kind of norm — plays a fundamental role. But my claim is that you then need to couple this with something else.
And one completely universal way, which will apply to anything, is to couple this kind of local control with the Sobolev estimates — which is exactly what we're going to do. You put these two together, you shake it all up, and you get what you want. So it's a mix between truly local control on the singularity and something that only sees scaling: you propagate whatever you have. This is exactly how we're going to do things: we get estimates here, and then we propagate them using the Sobolev machinery. So as far as making estimates is concerned, you see the beauty of this kind of thing. Imagine you take your equation here and decide to linearize — just to make things clear. Imagine I tell you that I have a profile, V_P, which is a solution to the stationary problem, a self-similar solution: schematically, the elliptic equation for V_P with its nonlinear term V_P to the p, equal to 0. And imagine you write v as V_P plus a correction — call it omega. Then what is the omega equation going to be? It's d tau omega equals linearized operator of omega plus nonlinear terms in omega. And what is the linearized operator? It's exactly this 1 over rho divergence of rho grad omega, plus a potential term created by the profile — something like p V_P to the p minus 1 times omega. That's my linearized operator when I linearize here, and then I have a bunch of nonlinear terms. And the observation is the following. The divergence part — you know what it does; it's completely self-adjoint in the weighted space. But this potential term: you can ask yourself, what is its impact?
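Schematically, and hedging on the exact constants (the spoken version drops the mass term and the factor p from the linearization; this is my reconstruction of the standard structure):

```latex
% Stationary profile equation, \rho(y)=e^{-|y|^2/2}:
\[
\frac{1}{\rho}\,\nabla\!\cdot\!\big(\rho\,\nabla V_P\big)
-\frac{V_P}{p-1}+V_P^{\,p}=0 .
\]
% Writing v=V_P+\omega gives
\[
\partial_\tau\omega=\mathcal{L}\omega+\mathrm{NL}(\omega),\qquad
\mathcal{L}\omega=
\underbrace{\frac{1}{\rho}\nabla\!\cdot\!\big(\rho\,\nabla\omega\big)
-\frac{\omega}{p-1}}_{\text{self-adjoint in }L^2(\rho\,dy)}
\;+\;\underbrace{p\,V_P^{\,p-1}\,\omega}_{\text{decaying potential}} .
\]
% The potential decays in space, hence acts as a compact perturbation of the
% self-adjoint part: it can only create finitely many unstable eigenvalues.
```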
And here the key is that this potential decays — it decays in space. So in fact, you can think of it as a compact perturbation of the self-adjoint part. Meaning what? The only thing this guy can do is create eigenvalues. How many it creates depends on the problem, but essentially, spectrally speaking, the potential is just a machine to create eigenvalues, and the self-adjoint part is a machine to create positivity. So, modulo a finite number of eigenvalues — because the potential decays — this thing is going to give me exponential decay in time: I get a spectral gap thanks to this kind of identity there. So I have very good control of my solution in the inner zone. And if I want to close — of course, because it's very localized, I cannot close my nonlinear term from this alone. But then I can couple these estimates with some Sobolev norms — or, if you're doing parabolic things, maybe with some maximum-principle-type estimate — to actually control my flow forever. But this is really the principle. In the case of the heat equation, all the profile can create is eigenvalues of a problem which is essentially completely explicit. And of course, the beauty of this is that it's not really spectral analysis: you can do everything by hand. It's all about energy estimates; you can understand everything in terms of energy estimates. And that makes this analysis so nice. In particular, it's been propagated to more complicated problems — where maybe your potential depends on time, type-II problems, et cetera — because for the heat equation, with energy estimates, you do not need any sort of abstract spectral machinery. OK, so that is the heat equation. Now let me say two things. If you do NLS, a Schrödinger equation — well, it's a different world. It's really different: put an i in front, and self-adjointness et cetera disappears immediately. It's a different universe. It's not a small problem; it's a huge problem.
And there are many different groups that have developed various techniques to go around this difficulty. None of them is simple, and in some sense I don't think we yet have a fair global picture of what we should or should not do. Tons of progress has certainly been made, but a few things are still missing there. But here we don't have that problem — this is the good thing about going to these variables: we don't have a Schrödinger equation anymore, we have a wave equation. And for the wave equation, there is a point here. So now I'm looking at this, and I want to try to get estimates — to derive estimates for my wave equation. I think I've written this somewhere here, just to get my algebra right. OK, so there are two kinds of things you can do. But first, let me be careful, because there is going to be a huge difference. Let me take the true wave equation: say I start from d t squared u equals Laplace u plus u |u| to the p minus 1. Really this one, not the renormalized one — let's say I really started from there. You renormalize exactly like you would for the heat equation, and you get an equation for the renormalized flow. Do you care about the sign there — the plus? Yes, it should be the focusing one, right. But what I care about is the structure of the linear operator, so if you want, we can drop the nonlinearity; that's not the problem. I'm just asking: if I renormalize this linear operator, what kind of thing do I get? I'm going to get something that looks like the system over there, but not quite. So let's do this computation. This is what you will find.
So there is this paper by Antonini and Merle, back at the beginning of the 2000s. And then there is this amazing series of works by Merle and Zaag, which is all about this. The first thing they do is renormalize, and they tell you what the renormalized equation looks like — and they write it down exactly like a wave equation. So I'm going to do the same thing. Their variable is omega, and they have something like this: d tau squared omega equals L of omega, minus twice (p plus 1) over (p minus 1) squared omega, plus omega |omega| to the p minus 1, and then a bunch of linear terms: minus (p plus 3) over (p minus 1) d tau omega, and minus twice y d y d tau omega — the renormalized space variable being y. And L — I'm going to write L in a different way in a second, but as written here you should think of it as exactly this: L of omega is (1 minus y squared) d y squared omega plus a1 d y omega, where a1 can be computed, et cetera. I'm just recalling that this is a one-dimensional computation; you can do it in a more general case, but this is all I want so far. So what's going on here? This is just algebra: I renormalized with the scaling of the equation, and I get an operator which is exactly like the one over there, in the following sense. You see, the highest number of space derivatives is in this term — two derivatives — but it degenerates when y equals 1. And then I have a bunch of cross terms: other two-derivative terms, cross terms in y and tau, and then order-one and order-zero operators. So it has exactly the same structure as what I have over there.
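For reference, the 1D renormalized equation in the Antonini–Merle / Merle–Zaag normalization, reconstructed from the terms quoted above (so treat the constants as indicative):

```latex
% 1D semilinear wave equation in self-similar variables:
\[
\partial_\tau^2\omega
=\mathcal{L}\omega
-\frac{2(p+1)}{(p-1)^2}\,\omega
+|\omega|^{p-1}\omega
-\frac{p+3}{p-1}\,\partial_\tau\omega
-2y\,\partial^2_{y\tau}\omega ,
\]
% with principal part
\[
\mathcal{L}\omega=(1-y^2)\,\partial_y^2\omega+a_1\,\partial_y\omega ,
\]
% which is degenerate exactly on the light cone y=\pm 1.
```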
And of course, in the case of the wave equation, you know exactly what's going on here. My renormalization typically works like this: u is 1 over lambda to the 2 over (p minus 1) times omega of tau and y, and y is x over lambda, something like this. And you know exactly what lambda is in the case of the wave equation: lambda is just going to be T minus t. There is no square root, because it's the wave — time and space scale the same way. So what does it mean? It means that y equal 1 — which is where my operator degenerates after renormalization — is nothing but the light cone: it's just |x| equals T minus t. So if I draw — this is x, this is time, and I put the blow-up time T here — I'm just drawing the backward light cone from the expected singularity. And what I'm saying is that my operator, in some sense, degenerates when I am on the light cone, that is, when |x| equals T minus t. So you should think of it in the following way: this is exactly like the equation over there, with one big difference. Here the degenerate coefficient — it's written here, it's 1 minus y squared — is explicit; it's as if the function q were constant, given. In my case, q is going to depend on space, because it's created by my profile: it has a shape. And this changes everything. So in the case of the semilinear wave equation I have exactly this explicit function, and then there is an algebra that is specific to it. What I'm trying to say is that when I linearize close to my profile, I don't have 1 minus y squared; I have something more complicated, and the fact that this function is not exactly that one — what this function actually does — is going to be essential for the analysis. So let's see next.
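The change of variables being described, written out (with the usual conventions; τ = −log(T − t) is my choice of renormalized time, consistent with the exponential decay statements above):

```latex
% Self-similar variables for the wave equation, blow-up time T:
\[
u(t,x)=\frac{1}{\lambda(t)^{\frac{2}{p-1}}}\,\omega(\tau,y),\qquad
y=\frac{x}{\lambda(t)},\qquad
\lambda(t)=T-t,\qquad \tau=-\log(T-t).
\]
% No square root here: time and space scale identically for the wave.
% The degeneracy y=1 of the renormalized operator is the backward light cone
\[
|x|=T-t
\]
% emanating from the expected singularity (t,x)=(T,0).
```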
Is this an effect of the quasilinearity? Absolutely — totally an effect of the quasilinearity of the equation. The fact that the coefficient in front of the second-order derivative depends on the profile is a quasilinear effect; it never happens if you have a semilinear problem. So the cancellation — that coefficient is zero exactly on the corresponding light cone? Exactly. That's the light cone; this is exactly how we think about it, we call it the light cone. Though I suppose "sound cone" would be more appropriate here — we do have a sound cone. But remember: we linearize close to a given profile, so we know to leading order what our sound cone is supposed to be. Typically it depends on the solution itself, but I'm going to bootstrap that I stay close to a given profile, so, at least to leading order, the cone is given to me. What I'm saying is: if I do this for the semilinear problem, I have a similar structure — this degeneracy of derivatives — but the algebra of the coefficient is very special. And I can immediately say why it's very special. It's very clear — this is exactly the point. For that problem, you hand me this equation and ask me for estimates: remove the nonlinear term, just look at the linear operator, try to understand what it does. Well, if it's really this operator, then I know it comes from the wave equation above. So I have control of Sobolev norms upstairs, which I can translate, exactly as before, into decay of Sobolev norms for the operator here. So at the linear level, if I have exactly this equation with the nonlinear term removed, then, taking sufficiently many derivatives, I will find exponential decay of this kind of norm.
But you have to be a little bit careful: there is something unpleasant from scratch with the wave equation, which you need to understand. When I talk about Sobolev norms — maybe I should do this here — there will be something funny; it's always the same structure. A typical Sobolev norm, for example the energy, is the integral over Rd, in space, of (d t u) squared plus |grad u| squared. Space scales well, but d t does not behave nicely: after renormalization it produces funny quantities, something like (d tau plus Lambda) applied to the renormalized variable omega, where Lambda is the scaling operator, plus et cetera. So you have to be very careful — this is where you understand that some degeneracy can happen: what d t in the original variables transforms into, after renormalization, is something funny. And this is a difficulty that, of course, does not show up for the heat equation. OK, so my claim is the following. For a semilinear wave equation, this is the structure I have. And for the semilinear equation there is a way — if you want to stabilize a self-similar solution, to run this kind of program — to do it essentially using only Sobolev norms. This is something my students, Ray and Kim, did very nicely; there really is a machinery like this which you can run. OK, so this is the first thing. The second thing is what we learned from the line of work that started with Antonini and Merle and is used systematically in the Merle–Zaag series of papers. What they observed is that there is something similar to the Giga–Kohn functional — there is a way to get estimates. And what you will find in the papers is exactly the following.
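To see why ∂_t is the troublemaker, differentiate the self-similar ansatz directly (a one-line computation under the conventions above; Λ denotes the scaling operator):

```latex
% With u(t,x)=\lambda^{-\frac{2}{p-1}}\omega(\tau,y), \lambda=T-t, \tau=-\log(T-t), y=x/\lambda:
\[
\partial_t u
=\frac{1}{\lambda^{\frac{2}{p-1}+1}}
\Big(\partial_\tau+\Lambda+\tfrac{2}{p-1}\Big)\omega ,
\qquad \Lambda=y\cdot\nabla_y .
\]
% So the energy density (\partial_t u)^2+|\nabla u|^2 becomes, after
% renormalization, a quadratic form in (\partial_\tau+\Lambda)\omega and
% \nabla_y\omega: a degeneracy can hide in the cross terms, which has no
% analogue for the heat equation.
```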
They tell you that they can realize this operator L — which carries the highest number of space derivatives and maybe the first-order term — as 1 over rho d y of (rho (1 minus y squared) d y omega). This is the 1D computation; I'm just writing the 1D one, so the variable is y. And rho is not a Gaussian weight anymore: it's (1 minus y squared) to the 2 over (p minus 1). So rho is a given weight. And of course, when you have things like this, you get a natural energy identity. You get d over d tau of a quantity that typically looks like this: the integral in the cone — for |y| less than 1 — of one half (d tau omega) squared, plus one half (1 minus y squared)(d y omega) squared, plus (p plus 1) over (p minus 1) squared omega squared, minus 1 over (p plus 1) omega to the p plus 1 — and everything is against rho dy, there are weights everywhere, because otherwise you are completely dead. And this is equal to minus — let's ignore constants, but I have some algebra — the integral of (d tau omega) squared times this weight rho divided by (1 minus y squared), the whole thing dy. In your favorite language, this is just testing the equation against a suitable vector field — a multiplier — and you do whatever you have to do, and some identity gives you a sign: something is pushing for you. So what's going on here? Two things. First, you recognize something similar to the Giga–Kohn functional — in some sense the structure is similar, with weights. Second, this is a machinery to get control of the solution in the cone. So this is really inside.
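Putting the pieces quoted above together, the functional and its dissipation read as follows (the constant −4/(p−1) is my reconstruction from the structure of the renormalized equation, not from the board):

```latex
% Merle--Zaag Lyapunov functional, 1D, with degenerate weight
% \rho(y)=(1-y^2)^{\frac{2}{p-1}} on the cone \{|y|<1\}:
\[
E(\omega)=\int_{-1}^{1}\Big(
\tfrac12(\partial_\tau\omega)^2
+\tfrac12(1-y^2)(\partial_y\omega)^2
+\tfrac{p+1}{(p-1)^2}\,\omega^2
-\tfrac{1}{p+1}|\omega|^{p+1}\Big)\rho\,dy ,
\]
\[
\frac{dE}{d\tau}
=-\frac{4}{p-1}\int_{-1}^{1}(\partial_\tau\omega)^2\,
\frac{\rho}{1-y^2}\,dy\;\le\;0 .
\]
% Because \rho vanishes on y=\pm1, the integration by parts produces no
% boundary terms; and (1-y^2)^{\alpha-1} is integrable precisely when the
% exponent \alpha (here 2/(p-1), in general \alpha(d,p)) is positive.
```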
Don't I get some boundary terms? Observe the weight — that's an excellent point. The weight is a power of 1 minus y squared; it cancels on the boundary, so there is no boundary term in this computation. Of course, you could run a more general computation with boundary terms if you wanted to, but this is the algebra. And you see, there is a point here which is exactly the key point. This is the one-dimensional computation, and this rho comes with an exponent which of course has to do with the scaling of the equation. If you give me a general nonlinear wave equation, you renormalize, you try to do this, and what you will discover is that this exponent is really some alpha of d and p, and it depends on scaling. As a matter of fact, in the setting where they were working, alpha was positive, because it's the conformally subcritical case. But there may very well be cases where alpha is negative — when Frank and Hatem studied the energy-critical wave equation, this was exactly the situation: alpha was negative. So alpha is dictated by the scaling of the equation. And you have to understand — this is exactly as you said, Sergio — if alpha is negative, look at this term here: rho divided by 1 minus y squared. If alpha is negative, I will pick up boundary terms. Another way of saying this: I cannot do this integration by parts, in the sense that, generically, this is singular on the cone; there is no reason why I should be able to integrate by parts. So you give me a wave equation, you renormalize it, and you ask yourself: can I produce this kind of identity? The answer depends on where the scaling lies. And of course, this is just one identity — there are tons of identities. But this question of what kind of control it gives us...
...how do we control the solution in the cone, and how does the degeneracy that shows up on the cone enter the analysis — that is the heart of the Merle–Zaag analysis, and of everything they tell you about what solutions should look like. So it doesn't work in higher dimensions? Well, you'd have to ask Frank and Hatem. But think of it this way: there are two kinds of things they do. They produce examples of these kinds of solutions, which is something very general; but they also give you classification results about what can or cannot happen. And whether that can be propagated to higher dimensions, et cetera, depends on what kind of result you want. The strategy certainly works very well in the radial case. But classification results — statements like "the only self-similar solution to this is given by that" — this kind of rigidity theorem is not always known. So there may be other things; it depends. But for the big picture, the way I'm going to use this: I don't have a classification problem, I have a linear problem. All I want to do is use these two ideas — Sobolev estimates, and this kind of weighted norm in the cone — to try and get estimates on my problem there. So let's try to do that. Of course, the first thing we tried — you understand, it's exactly this. So now I want to go back to my problem. I have exactly this structure, but the big difference is that I don't have exactly this weight 1 minus y squared: I have this function here, which depends on z — my renormalized variable is z here — and it has a shape; it's not given to me explicitly. So the first thing you would like to do — the first, maybe stupid, idea — is to try and run Sobolev norms. You simply say: I'm going to try to propagate, to compute the variation of...
...something like this — so how did I call my variable? phi. I want to propagate a quantity like d over d tau of the integral over Rd of (d tau phi k) squared plus |nabla phi k| squared, where phi k means I have commuted my flow with k derivatives, something like this. And I want to see what this is equal to. And remember, for the semilinear wave equation I know what to expect: something like minus 2(k minus s_c) times the same norm — the integral of (d tau phi k) squared plus |nabla phi k| squared — plus lower-order terms. What I mean is that exponential decay after renormalization is simply the statement that my differential inequality comes out like this. So what does it mean? It means that if you let derivatives go through your equation, compute the principal part, and compute the Sobolev norm on the principal part, you will see a very good term arise. It's there to help you: it's the machine that gives exponential decay. So you say, OK, fine, I'm going to do that. And you understand, if you do it for exactly the semilinear wave equation: when you start commuting derivatives, derivatives will hit the coefficient. But there is a miracle, because the coefficient is the function 1 minus y squared: you get cancellations, et cetera — of course it has to be so, it comes from the problem upstairs. That coefficient is your friend. If you do this on my problem here and start commuting derivatives, then you will start seeing the profile-dependent coefficient in the picture, and it will not be friendly at all. I can tell you what it gives — we've done this computation ten times. You're going to get a quadratic form, maybe with constants that depend on k.
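The target differential inequality, in symbols (s_c is the critical Sobolev exponent; "l.o.t." stands for the lower-order terms):

```latex
% Commuted Sobolev energies: \phi_k = flow commuted with k derivatives.
% The hoped-for inequality is
\[
\frac{d}{d\tau}\int_{\mathbb{R}^d}
\Big((\partial_\tau\phi_k)^2+|\nabla\phi_k|^2\Big)\,dy
\;\le\;
-2(k-s_c)\int_{\mathbb{R}^d}
\Big((\partial_\tau\phi_k)^2+|\nabla\phi_k|^2\Big)\,dy
+\text{l.o.t.},
\]
% which integrates to exponential decay e^{-2(k-s_c)\tau} for k>s_c.
% For the pure semilinear wave equation, commuting through 1-y^2 produces
% exactly the cancellations needed; with a nontrivial profile q it does not.
```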
You're going to get a coefficient a1 times (d tau phi k) squared, a cross term a2 times d tau phi k d z phi k, and a term a3 times (d z phi k) squared — a full quadratic form. And of course a1, a2, a3 depend on my profile, on this function here. So remember, you have no explicit formula for your profile. You form the discriminant of this quadratic form and you start asking yourself: can it have the right sign? You look at what your profile looks like, and the answer is no — it is certainly not globally true. This is not going to work: if you just do the naive thing that works for the semilinear wave equation, it fails. It's because of the presence of the profile q. Does it mean that somehow you have to adapt the coordinates to the geometry of the profile? Well, you tell me — maybe. As always, maybe you shouldn't use the standard coordinates; use adapted coordinates, whichever. I'm not aware of — you know, you see this all the time in relativity. Yes, and we do have some experts in general relativity in our group too; they seem to have an idea of what should be done there. Well, I don't know. Maybe changing variables could be the answer at the end of the day, I don't know. I like the idea that there is a simple, universal, brute-force way of doing things, because in some sense it's a machine to do whatever you want, right? Now, interestingly enough, as I said, some experts in fluid mechanics tried to revisit this using Riemann invariants — a sort of more refined structure based specifically on the flow — which, I don't think it changes things fundamentally, but they do see some structure there, if you want. But at the end of the day, I like the fact that there is a simple universal way to deal with things.
That doesn't mean there isn't something finer — in particular, I will speak about eigenvalues later; maybe there's a way to be a little bit sharper about what's going on there, OK? But if you do Sobolev estimates by brute force alone, that is not going to send you to heaven, OK? That is for sure. You cannot just attack the problem like this; it will not work. This is the first thing. So the second strategy is to try and control local norms in the cone, and then to adapt this kind of Merle–Zaag-type estimate. And then you know what you have to do. If you want to do that, it's very clear. Remember, I told you that whether this weight rho vanishes on the cone, for a given problem, depends on scaling, OK? And I'm certainly above energy-critical, so it's hopeless to think that you can control your flow there directly. So the first thing you do is let derivatives go through your equation: you commute the flow with derivatives. If you call phi k, say, Laplacian to the k of phi, you should think that you get a wave equation for phi k: d tau squared phi k equals... And as you start commuting with derivatives, you know what's going on: this term is your friend — it's here to help; it's the scaling term. And this other guy is a mess — not very friendly. So you do whatever you have to do, and you say the following: the principal part of the operator, I can write exactly with a measure — maybe g depends on k, so L with measure g k, acting on phi k. And then I have all the other terms that show up: in particular cross terms, something like twice h Lambda d tau phi k, et cetera. So I have a whole bunch of terms.
And of course, all the coefficients that come in here carry a k-dependence, where k is the number of derivatives. OK, but you can certainly do that: you decide to commute with derivatives, and you try to realize your linearized operator — with its degenerate part — exactly as a divergence with respect to a measure. And this is going to work; there is a unique way to do it. Maybe I'm going to erase that. What you find is that the typical structure you get is the following: L with measure g k, of phi k, is something like 1 over (g k z to the d minus 1) times d z of (z to the d minus 1 g k Delta d z phi k) — schematically; I have my radial measure z to the d minus 1, the weight g k, and this degenerate factor Delta. And you should think that Delta is the sonic weight: it vanishes on the sonic line. If I hadn't erased the Merle–Zaag identity, this is exactly what I was seeing there: I have a measure, which in their case would be rho — but it's not only a measure; there is also this extra degeneracy, which is exactly the analog of the factor 1 minus y squared. Except that here it depends on my profile: Delta is really Delta of the profile, and I have the exact same thing, that Delta degenerates at P2. So there is a unique way. Again: you commute your flow, you look at the highest number of derivatives, you try to compute the measure, and you will see that there is a unique g k. So you compute g k. And you compute it explicitly? Yeah, exactly — I take derivatives and compute it. So now, the question you should ask me — think of it this way; again, let's go back to the semilinear wave equation.
If I do this computation there, I know exactly what's going to happen: as I take derivatives, I shift the exponent in the principal part of my operator — it's pushing for me — and if I take sufficiently many of them, whatever exponent showed up here is going to become positive. Typically, for example, Frank and Hatem started in 1D, where you're never energy-supercritical, so their alpha always has the good sign if p is not too large. But if you start H1-critical and try to control the H1 norm, you're going to have the wrong sign; if you take more derivatives, you shift it, and you get the right sign. So the question is the following — and I want this exponent to be positive, because if it's not, as we said, either I get boundary terms when I integrate by parts, or I have to ask my functions to vanish on the cone, and they have no reason to; the function space becomes completely different. So I'm asking: you compute g k, and — this is the key question — how many derivatives do I need to make sure that at P2, which is the degenerate point, g k of z behaves like (z minus z2) — z2 being the P2 point — to a power alpha, with alpha positive? Alpha depends on k, and k min is the smallest k, when I commute, such that g k actually vanishes on the light cone with the right power — because if it does not, it's a different world. Well, you do your algebra, and you understand that everything depends on your profile q. Everything depends on — maybe I'm going to erase that and draw my phase portrait again, briefly; we need to be a little more precise here. So this is a question about the profile — and this is a quasilinear question.
This question does not arise for a semilinear wave equation, because there the degenerate coefficient does not depend on the self-similar solution — it's given to me. But here it certainly does; it's a quasilinear question. So let me draw my phase portrait again, briefly. This is sigma, omega. I have this and this; I know my trajectory is going to come here and go back down. My problem is here, at P2 — this is my sound cone, or light cone, if I want to make the analogy with the wave equation. So now I need to ask myself: how many derivatives do I need? It all depends on how your profile behaves — on the behavior of the profile at P2. And in particular, what you discover is that k min is related to r minus r star, where r star is this particular value, d plus l over l plus something — a quantity depending on d and l. And as a matter of fact, the closer you take r to r star, the larger you need k min. This is really algebra; you realize that there is some cancellation here. So now I need to be more specific. At first you say: OK, fine, I'm going to be able to do that. So k min is the minimal number of derivatives: I'm asking how many derivatives I need so that, when I compute the measure, it vanishes on the light cone. Because if it does not — say you work with H1, for an H-I-don't-know-how-many supercritical problem — this is not going to work at all. Initially it's very bad: alpha is, say, minus 10. But you know that when you take derivatives — in some sense the problem remembers the simple semilinear structure — taking derivatives is good for you. It's a machine; it pushes in the right direction. So when you compute this number: the bigger the k, the better off you are.
And as a matter of fact, you can compute this number; you can compute exactly how it depends on your profile. But then I have bad news. My bad news is the following. I'm taking derivatives of the equation, and my profile is in the picture, so you have to be very careful. I told you that my profile at P2 is not infinity, but my profile has a problem at P2. Careful — at P2, there's an issue. So let me draw this again. So I'm arriving; I have this, and I have this. And what I said the other day is that at P2, you should think of it this way. I write P(z), where P is the profile. At P2, you have a universal expansion: a sum for k = 0 to K of universal numbers c_k times (z − z_2)^k, plus a constant c times (z − z_2)^gamma, plus a remainder. And I said that gamma, you should think of it this way, is 1/(r* − r), and K is the integer part of gamma. OK, so what does it mean? Remember, these phase portraits are indexed by this parameter r. I want to be in the range r < r*, because otherwise it's not the right picture. And what I'm saying is that when r < r*, if I look at any of the integral curves that enter P2 — except an exceptional one — all the integral curves share the same universal expansion: the numbers c_k do not depend on the trajectory. And the Cauchy problem sits in this constant c here. So what does it mean? It means that any trajectory that crosses comes with a number c like this. And in particular — I always assume that gamma is not an integer, because otherwise it's even worse — if gamma is not an integer, this constant c indexes the trajectories. So this means that this guy has limited regularity: I cannot take more than exactly K derivatives.
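Collecting what was just said, the universal expansion at P2 can be written schematically like this (my transcription of the board, so the notation is hedged):

```latex
P(z) \;=\; \sum_{k=0}^{K} c_k\,(z - z_2)^k \;+\; c\,(z - z_2)^{\gamma} \;+\; \text{(remainder)},
\qquad \gamma = \frac{1}{r^* - r}, \quad K = \lfloor \gamma \rfloor,
```

where the c_k are universal numbers shared by all trajectories entering P2, and the single constant c indexes the trajectory.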
So now you do your algebra. And my problem is that k_min — at least that's what we find — is K + 1. There is nothing I can do: the number of derivatives that we need to take so that the operator, spectrally speaking, is correct — that is, so that this alpha is positive — is, unfortunately, exactly equal to the regularity of my profile plus one. That's what we find. So maybe there is another way to spectrally realize the operator, but if this is what you want to do, this is a fact. It's a computation; you just sit down and do it. So what does it mean? It's very frustrating, because you could very well send r to r* — r is free — so you can have as much regularity as you want; this gamma can be huge. Nevertheless, whatever you do, you need to be above it. And I think this is a very quasi-linear phenomenon. It's telling you that the solution carries its own scaling, which is the speed r, and everything that happens on the light cone is related to exactly this computation. So what does it mean? It means that we are not able — I'm not saying this is the end of the story, I don't know — but we are not able to realize this linearized operator in a decent spectral way if what we're working with is a generic trajectory. My problem is here. The only hope — OK, so maybe now I can erase. No, I should keep this equation; I keep it here. Yeah, I can erase that. So let me look at that. So, my only hope. So now, I'm sorry, I need to take a minute to redraw this thing. Let's be a little bit more careful. I redraw my phase portrait: this is sigma, this is omega, this is this, this is this. This is delta = 0. Then I have something here, a critical point. So I have a unique trajectory — that's the one I'm interested in.
So here I have a slope. I have a unique trajectory that comes from infinity, from over there, and I want it to go here. OK, so my point is the following. Gamma is 1/(r* − r), so it's positive, and in principle it can be taken huge if I want to. So let me put it this way: if I take any of the curves that go through P2, then I have this limited regularity, and it kills me for the spectral analysis. However — and this is the point — passing through P2, there is always a unique solution that is C-infinity: I just take c = 0. Remember, in my expansion, this c is the Cauchy data for the trajectories; it indexes the trajectory. So I just take c = 0, and that one is unique. So the question is: can I choose r, which is my only free parameter, such that the unique C-infinity solution that crosses here coincides with the unique solution that comes from infinity? Question: can I choose r such that c(r) = 0 — which means that the solution is C-infinity when passing through P2? If I can do that, in some sense I don't even care about C-infinity; I just care about K + 1 derivatives. But that's the only way we found. Then I can do whatever I want with my number of derivatives. But this is the question: can you choose r such that, in fact, the unique guy that comes from there coincides with the C-infinity solution that passes through P2? And of course, let me remind you that when you pass this point, anything can happen to you. You can go there — that's bad, but that's an exceptional trajectory. Or you flip a coin, 50-50: you cross green, or you cross red; either you go up, or you go down. So not only do I want to be the C-infinity solution, I want to cross red. So this is the question. So maybe let me just take five minutes. Say I can. And let me close, because my questions are exactly here.
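The question just posed is a shooting problem: tune the one free parameter until two special solutions coincide. As a hedged toy illustration — this is not the actual profile ODE, just a stand-in model where the "matching condition" is a sign change of a scalar function of the parameter — here is the standard bisection shooting:

```python
import math

def solve_ode(r, n=1000):
    """Toy model ODE: w'' = r*w, w(0)=1, w'(0)=0, integrated by RK4 on [0,1].
    Stands in for 'integrate the trajectory and read off the constant c(r)'."""
    h = 1.0 / n
    w, v = 1.0, 0.0
    f = lambda w, v: (v, r * w)
    for _ in range(n):
        k1 = f(w, v)
        k2 = f(w + 0.5*h*k1[0], v + 0.5*h*k1[1])
        k3 = f(w + 0.5*h*k2[0], v + 0.5*h*k2[1])
        k4 = f(w + h*k3[0], v + h*k3[1])
        w += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return w  # plays the role of c(r): we want to choose r so that this vanishes

def shoot(lo, hi, tol=1e-6):
    """Bisection on the free parameter: find r with solve_ode(r) = 0."""
    flo = solve_ode(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * solve_ode(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, solve_ode(mid)
    return 0.5 * (lo + hi)

# For this toy model w(1; r) = cos(sqrt(-r)) when r < 0,
# so the matching value is r = -(pi/2)^2.
r_found = shoot(-10.0, 0.0)
```

In the lecture the matching condition c(r) = 0 is of course far more delicate (the constant sits behind a degenerate singular point), but the one-parameter shooting structure is the same.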
So say I can; say the answer is yes. So assume yes: pretend you've been able to find r such that there is a miracle — when you pass through P2, you are as smooth as you want. Then you can take as many derivatives as you want. And then you will discover that indeed, once you've taken sufficiently many of them, you do get an estimate, at least for the highest number of derivatives in your equation. That is, when you commute with k derivatives, you can realize your operator with a measure that now vanishes on the cone. Then what you will find, if you just keep the highest-order derivatives — actually, it's a little bit more tricky than that, so let me say it roughly — what you should expect is a Merle–Zaag type of monotonicity formula. The expectation is typically something like this. Remember — did I erase it? Yes, I erased it. You will have d/d tau of some quantity: something that sees space derivatives, with a weight (z − z_2) to some power alpha — actually it's alpha + 1 there, so it's degenerate — controlling the space derivative of phi, and then the time derivative of phi. Here phi_k is Laplacian to the k of phi, so you control a large number of derivatives, and this is dz. And you will find that it's equal to what? Exactly as in Merle–Zaag, you had something like this: some function A, which depends on the profile, times (d tau phi_k)^2, divided by (z − z_2), times the same kind of weight (z − z_2)^alpha, dz. This is what you should expect, something like this: you can produce some sort of vector field that gives you this kind of identity. But then there is something that is not for you to choose.
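Written out, the monotonicity formula just sketched should look schematically like this — a hedged reconstruction of the board, not the exact identity from the paper:

```latex
\frac{d}{d\tau}\int \Big[\,(z-z_2)^{\alpha+1}\,(\partial_z \phi_k)^2
\;+\;(z-z_2)^{\alpha}\,(\partial_\tau \phi_k)^2\,\Big]\,dz
\;\lesssim\; -\int A(z)\,\frac{(\partial_\tau \phi_k)^2}{z-z_2}\,(z-z_2)^{\alpha}\,dz,
\qquad \phi_k = \Delta^k \phi,
```

where A is a function built from the profile, and the whole game — as the next paragraph says — is the sign of A.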
It's given to you if you have the semi-linear heat equation, but here it's not given to you. These kinds of quantities, again — you do your algebra, and these are functions that depend on the profile. So you need a sign. I need a sign; I need repulsivity. Something has to push for me, and it's not given to me at all; it's whatever it is. And of course, there is no formula for this guy. So this is unfriendly. So the sign has to come from the ODE of the profile itself, and this is what we check? Yes. This is what I call repulsivity: the fact that, indeed, when you compute this kind of thing — this is what you want — the principal part of your vector field kicks something away, and it gives you some decay. The correct spectral framework for this, of course, is the framework of maximal accretivity. What you've done here, you've simply said the following. If you write your wave equation as a system, you have a variable X — something like (phi, d tau phi) — and you've written it as d tau X = MX plus nonlinear terms. And what I'm saying is that what I'm doing here is producing a norm such that, when I compute the scalar product of MX with X in this norm — and this scalar product, if you want, corresponds to taking derivatives of my equation — with this kind of measure, I'm going to discover a coercivity estimate: the scalar product controls some constant, with my dependence on k, times the norm of X itself. Of course, this is never going to be true for M itself — never. What I should think is that this holds for an M tilde: that would be maximal accretivity. But what I can say is that I can realize M tilde as M plus A, where A is typically a perturbation generated by the profile, and A is a compact perturbation in my Hilbert space H_k.
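A hedged finite-dimensional toy of that last point: a strictly dissipative symmetric operator M, perturbed by a finite-rank A, can acquire at most rank(A) unstable eigenvalues (symmetric eigenvalue interlacing). This is the linear-algebra shadow of "a compact perturbation creates only finitely many eigenvalues"; the matrices are made up for illustration.

```python
import numpy as np

n = 20
M = -np.diag(np.arange(1.0, n + 1.0))   # symmetric, spectrum in [-n, -1]: strictly dissipative
v = np.ones(n) / np.sqrt(n)             # arbitrary unit vector standing in for the profile terms
A = 100.0 * np.outer(v, v)              # rank-one "compact" perturbation with large norm

eig_M = np.linalg.eigvalsh(M)           # all negative: pure decay
eig_MA = np.linalg.eigvalsh(M + A)      # interlacing: at most one eigenvalue crosses zero

unstable = int((eig_MA >= 0).sum())
# M + A is still dissipative modulo (here) exactly one unstable direction,
# which would have to be modded out dynamically, as in the lecture.
```

In infinite dimensions the analogous statement is that a compact perturbation of a maximally accretive operator changes the spectrum only by finitely many eigenvalues in any half-plane strictly inside the decay region.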
This is what I told you for the heat equation: of course, if you add the profile, then it can create eigenvalues. So the machinery here is a machine that creates finitely many eigenvalues — that's what this machinery is about. What this is telling you is that if you take a norm with sufficiently many derivatives, you can form a Hilbert space and realize your linear operator as a compact — actually, finite-rank — perturbation of an operator for which this kind of repulsivity estimate gives maximal accretivity. And then you are in a standard framework where you know that, modulo finitely many eigenvalues, which you need to mod out dynamically, you have a spectral gap. And your spectral gap is exponential decay. So this is the abstract machinery to get decay. So now there are two things that remain to be done — and I should say something first. Let me say it. This is an abstract spectral framework of maximal accretivity, and I should say that, in the context of wave equations, the first to actually use this explicitly were Donninger and Schörkhuber. They were the first ones to actually use it, for a semilinear wave equation; this was back in 2008 or something like this. And they certainly had a point: this kind of abstract machinery can be used in this context. And because of the quasi-linear structure, it's important. So what is left to be done? Let's just take two minutes. What have we done here? We've understood the machine. So if this is the origin, and this is, I don't know, maybe z — I call it z — then this is z_2, my light cone. Then maybe we've obtained exponential decay here — exponential decay in time, modulo finitely many eigenvalues. And we've learned how to do that; you can mod them out.
There are various ways in which you can do this. So you have three problems. Of course, problem one: it's exactly like for the heat equation — controlling your flow here is certainly not going to be enough. I want to control my flow everywhere, because I have perturbations, et cetera; I cannot live with just an estimate inside the cone. So the first thing I need to do is pass the cone. And it's a standard difficulty, because my measure degenerates on the cone. So the question is: how do I pass the cone? There are several ways in the literature in which this has been addressed. At the end of the day, the truth is the following: there is no cone. The self-similar solution sees a given cone, but the operator does not. All you need to remember is that all this business is an artifact of renormalization; you need to use the fact that, in particular, this guy here is here to help you. Some of my collaborators like to see it exactly as you would, Sergio: they like to think of it as the flow near a horizon, with a redshift effect hidden in the picture. That is, a posteriori, exactly one way you can think about it. I like to think about it by simply saying that it's just an artifact of renormalization: if you change variables, if you shift things a little bit, you will see this degeneracy disappear. And this is how we do it in the paper. So if you change variables a little bit, you can pass the cone. This singularity is fake — an artifact of your change of variables. Once you've propagated estimates beyond the cone, then it's a different world; you know exactly what you're doing. Then, in some sense, it becomes a wave equation, and you use exactly what I told you. Then the machinery that didn't work — because it couldn't work globally — turns out to be extremely efficient here, and you can simply close things with standard energy estimates. You see, I like to think of it.
It's a propagation in space. That is, I have an estimate when z is small — a certain estimate in time, some decay in time — and as z gets bigger and bigger, I'm going to bootstrap a similar estimate; I just need to accept that I lose on the decay in time. But that's fine. The renormalization estimates that I showed you are precisely meant to do this. This is a machine. Think of it this way: if I look at what my blow-up solution looks like in the original variables, I'm concentrating, and I'm becoming huge — I zoomed in. And of course, this is y, this is x. And what I'm trying to say is the following. Maybe this is z, with z = x over lambda; think of it as x/(T − t) if you want. So when I am in the cone, near the singularity — that is, at scale x of order T − t — I have a certain size 1. There, I have some estimate in time: I look like my profile plus something very nice. But if I am at x of size 1 — that is, when z is of size 1/(T − t) — I should look very different, in particular because my profile does not decay well, but I want my solution to decay well. So in fact, the kind of estimate that you have here and the kind of estimate that you have there should be very different. And what you have in the middle is simply, if you want, an interpolation: whatever you have here, you propagate it using scaling and the Sobolev norms, and that gives you exactly the estimate in the middle. So it's all about starting somewhere and propagating the estimate. Propagating by scaling? So, I mean — I told you, if you try to run control of Sobolev norms et cetera, the profile is not going to agree with it. And it's not going to agree with it because of this region. As soon as you move away, it's actually more standard. Think of it this way: I don't know how to produce vector fields that would give me real dissipativity in this region.
I need to appeal to some abstract spectral statement. But once I have that, then we can construct vector fields here that really push for you: energy estimates. This is an adaptation. The only point I want to make is that it's really space-time: the amount of decay that you should expect depends on where you are. You cannot expect something uniform, because what you're trying to describe is this picture — what happens here is very different from what happens there. So this is the only thing I'm trying to say: you need to propagate something, and the rate of decay in time depends on where you are in space. But in some sense, this is something that we and other groups have learned how to do on other problems; the fact that it's energy estimates makes things very nice. Of course, as you know very well, this is a quasi-linear problem, so we have a serious problem of loss of derivatives if you do an estimate stupidly. So of course, you don't want to do an energy estimate stupidly: you have to keep the full profile, et cetera. You have to do the right thing so that you don't stupidly lose derivatives in the middle. It's just part of the annoying structure of the problem, but it's easily done — no big deal in this case. And then, once you have this, of course, you need to close the nonlinear term. But in some sense, this is old-fashioned, grandpa mathematics: we're taking tons of derivatives of the equation, so closing the nonlinear term in this case — of course, you have to propagate decay, you have to check a number of things — but in some sense, it's simply large Sobolev regularity. There is nothing deep: it's a huge bootstrap, and you close everything by hand. It's all about the norms. You have to be a little bit careful, but at the end of the day, it's not a problem. And for NLS, there is one other thing that's a little bit annoying: you need to track that you don't vanish.
Because I'm writing everything in phase and modulus, I want to make sure that my correction is really smaller than my initial modulus, and you have to be careful about what happens there. I don't want to see a singularity because my complex number vanishes, right? I do not want this to happen. OK, but at the end of the day, this is the program. OK, so let me maybe, in the last 10 minutes, just explain — 15 minutes? Perfect. OK, no — 10, and then a few questions. So the plan. There is one question mark, one question remaining, and maybe I can state it over there. It's this question here: I need to understand the C-infinity solution that passes through P2. So let me set this up. For example, this is omega. If I write omega as a function of z — or maybe of x, because x was log z, maybe that's simpler — so if I write omega(x): any solution that passes through P2 can be written as a sum from k = 0 to K of universal numbers c_k times (x − x_2)^k, plus a constant times (x − x_2)^gamma, plus higher order. And the higher-order terms — you understand how it works — are influenced by the constant. And again, gamma is K + alpha, with alpha strictly between 0 and 1. And gamma is essentially — sorry, it's the other way around — 1/(r* − r), where r* is given to me as a function of d and ℓ. OK, so this is what I want to do; I'm going to be explicit in a second. Now, if gamma is an integer, there is no C-infinity solution: there is a log there, it's a disaster. I don't want to see this case, OK? And this is the weakness of this problem.
The C-infinity solution I'm interested in disappears when gamma is an integer, OK? That is the key. So this is the first thing. The second thing is the following. In general, the way you should think of it is exactly this: you move r so that gamma varies from K to K + 1. At the boundary, you know that the C-infinity solution disappears — something concentrates, one way or another. And you ask yourself: as I evolve gamma, or r, between these two integers, what does the C-infinity solution do? And is there a way to show that, in fact, for a well-chosen value of r, it coincides with this guy, which is always there? This guy is always there, OK? That's the question you ask. OK, so we gave an answer to this question — a very partial answer: we answered yes in some regime. And the regime that we choose is when r goes to r*, in a neighborhood of that value. So what I claim is that there are well-chosen values. Think of it this way: if these are K_n and K_n + 1, you have to think that gamma_n is 1/(r* − r_n). So if I take n large enough — if I'm close enough to r* — then we can find, in any such interval... there is an issue of parity: I think K needs to be even, or odd — I never remember which of the two. But if you choose r in this region, then you will be able to find a solution so that it works. OK, so what's the point with this value? What is special about r*? It's very clear: r* is a very specific value. I said this the other time: we need r < r*, because this is where P5 is the attractor — I want P5 to the left of P2. r* is the value at which these two points collide: it's when P5 goes to P2. So it means the following. When these two guys collide, what I have — so this is red, this is red, that's OK.
This is red. So I'm just zooming in: this is P2, this is P5. So I have a unique solution — everybody has the same slope here — and I know that there is a unique trajectory I'm interested in that comes from over there. And then what I'm doing is sending P5 to P2. And the first thing you should ask me: what's the geometry? What do these slopes do? Well, this thing shrinks, so at the end of the day, the endpoint would be something like this — I suppose it's something like this. This is what you would have at the end of the r going to r* limit: a degenerate triple point here. OK, so what do you want to do? And maybe — OK, five minutes — it's very clear. What we do is understand things in the limit when r goes to r*. So what does it mean? If r goes to r*, it means I want to send gamma to infinity. So what I'm trying to understand is the following. I have a problem where all trajectories share the same Taylor expansion at the point x_2 — they all have the same thing. What I'm trying to understand is the impact of this constant here, and more specifically, what happens if I set it equal to 0: I really want to look at the solution for which this is 0. So what I'm really trying to do is compute this trajectory. And I know that this problem has a weakness, in the following sense. When you compute your Taylor coefficients — imagine you compute the c_k Taylor coefficient — the weakness is that if gamma is near an integer, then the c_K term is going to get huge. Because when I compute my Taylor coefficients, think of it this way: if you compute c_k, you will typically find something like this.
You will find something like (k − gamma) times c_{k+1} equals a certain number times c_k, plus whatever — maybe the factor is k − gamma, or k + alpha minus something; it's of this type. So what I mean is the following. You see, because this factor involves gamma: indeed, when gamma is an integer, the factor vanishes at some k, and I cannot compute the next term. But as soon as gamma is not an integer, of course, I can always compute — except that when I get close to an integer, some number gets huge here. So the key to this analysis is simply to say: I need to compute c_K; asymptotically, I need to compute the value of this guy as gamma goes to infinity. So you have, if you want, a nonlinear ODE with a degenerate point. Schematically, it's something like this: (b − x) times x times omega double prime, plus a certain number times omega prime, equals a nonlinear expression in omega. Here b is a small parameter; b is related to r* − r. And what you have to do is plug in the Taylor expansion: you write omega as the sum of the c_k x^k, et cetera. And then you have an induction relation for the c_k. And then my point is the following. First of all, you have to put the equation in this form — this you can do — and then you know exactly what's going on here. If you remember ODEs 101, what I'm facing here is a non-degenerate (regular) singular point: at x = 0, if b is nonzero, I'm fine. Which means that I will always be able to compute my c_k coefficients, as long as gamma is not an integer. Of course, it's very nonlinear — you have a nonlinear term there, right?
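A hedged toy version of that recursion, with everything nonessential stripped out: take c_{k+1} = c_k / (k − γ). The small divisors make the coefficients blow up as γ approaches an integer — which is exactly the "some number gets huge" above.

```python
def taylor_coeffs(gamma, n):
    """Toy recursion c_{k+1} = c_k / (k - gamma), c_0 = 1.
    A model of the induction relation, not the real profile ODE."""
    c = [1.0]
    for k in range(n):
        c.append(c[-1] / (k - gamma))
    return c

# As gamma approaches the integer 4, the divisor (k - gamma) at k = 4
# degenerates and the next coefficient c_5 blows up.
near = taylor_coeffs(4.01, 5)   # gamma close to an integer
far = taylor_coeffs(4.5, 5)     # gamma safely between integers
```

The lecture's point is precisely to track the size of the last computable coefficient c_K in this near-resonant regime.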
So you have to think that something is bad. But this problem has a weakness. Because, after renormalization, formally, when r goes to r*, this b goes to 0, and then I'm looking at the limit equation. But this is a different world: this is a degenerate critical point at x = 0, for which, of course, the model solution is e^{−1/x}. So it's not analytic — it's Gevrey. So my expectation is: when b is nonzero, the Taylor coefficients of this guy should grow like crazy — c_k should grow like gamma to the k. That's certainly what you expect. And my point is the following. There is a weakness here. You see, when we first looked at this, we were very worried. It's awful: I have a formal limit problem, and when I look at the regularity, it's Gevrey regularity — this thing is growing like crazy. And my nonlinear term — when you do PDE, if you have growth in the linear term, the nonlinear term should get wild; this is even worse. How am I going to put my hands on this? I need to control this number; I need to understand what it's like. But of course, this nonlinear term — we understand it. It's a convolution. I have terms like omega squared, or something like this. So imagine writing down the k-th Taylor coefficient of omega squared: it's the sum over k1 + k2 = k of c_{k1} c_{k2}. But the coefficients are growing like crazy — so in fact, the biggest term in this sum is just the first term; the other terms are ridiculous, precisely because of the growth. So this problem looks very nonlinear, but actually it's not, really: it's all about this convolution structure. We have a limit problem with an enormous growth of the Taylor coefficients, but precisely this enormous growth makes the nonlinear term weaker and weaker, in the sense that the cross terms are negligible. So it's only fakely nonlinear. That's something you can count: all we need is Stirling.
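The domination of the convolution by its end terms is easy to see numerically. A hedged toy: take model coefficients a_k = k!, standing in for the rapid Gevrey-type growth; the full convolution sum is then only a few percent bigger than its two boundary terms.

```python
from math import factorial

def conv_vs_endpoints(k):
    """Compare the convolution sum_{j=0}^{k} a_j * a_{k-j} with its two end
    terms 2 * a_0 * a_k, for model coefficients a_j = j! (fast growth)."""
    a = [factorial(j) for j in range(k + 1)]
    full = sum(a[j] * a[k - j] for j in range(k + 1))
    ends = 2 * a[0] * a[k]
    return full / ends

ratio = conv_vs_endpoints(40)
# The cross terms contribute only O(1/k): the "nonlinear" term is
# essentially linear in the last coefficient.
```

This is the counting argument ("all we need is Stirling") in miniature: super-geometric growth makes the interior of the convolution negligible.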
You need to count. It's a bit painful, but this is what it's about. And the point is that for the limit problem, when b = 0, we can understand the sequence c_k — there is no cutoff K there — and we sort of have an asymptotic. What do we say? We simply say: for b = 0, asymptotically, the c_k behave linearly, modulo a number. If you want: c_k behaves like a constant times gamma to the k, roughly. And this constant is a nonlinear number, in the sense that it comes from all the cross terms; but if you manage to show that they are, in some sense, getting weaker and weaker — because some series is convergent, something is dampening — then this constant is exactly some asymptotic number. It's a nonlinear number, and we don't know how to show that it's nonzero. This is what we do on the computer; this is the check that we need to do: we need to check that, for the relevant values, this number is nonzero. In some sense, it would be nice to have an analytic proof there. But this is really just a number: I have an induction relation, I have a series, I know that it converges, and I need to check that its sum is nonzero. Then, once you have this, you go back to the real problem and you say: careful, b is nonzero, and for b nonzero the solution is analytic. So now you need to ask yourself — of course, analyticity means the growth is only geometric, so eventually it must saturate. So what you need to understand is: if you plot c_k against k, then initially, for k small or of size one, you will see the Gevrey regularity — that is how the coefficients start growing. But then something saturates. So you need to understand: what is the threshold, in terms of b — what is the k_b? Because I'm interested in this guy, in this c_K: is it in the Gevrey region, or in the analytic region? Am I before the saturation?
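A hedged toy model of that saturation: take a recursion whose ratio is c_{k+1}/c_k = (k+1)/(1 + b(k+1)). For b = 0 this is exactly factorial (Gevrey-type) growth; for b > 0 the ratio grows like k while k is well below 1/b, then saturates at the geometric (analytic) rate 1/b. The crossover sits at k_b of order 1/b, which is the competition the lecture describes; the specific recursion is invented for illustration.

```python
def ratios(b, n):
    """Successive ratios c_{k+1}/c_k for the toy recursion
    c_{k+1} = c_k * (k+1) / (1 + b*(k+1)).
    b = 0 gives c_k = k! (Gevrey growth); b > 0 caps the ratio at 1/b."""
    return [(k + 1) / (1.0 + b * (k + 1)) for k in range(n)]

b = 0.01                 # small parameter, standing in for r* - r
r = ratios(b, 1000)
# Early regime (k << 1/b = 100): ratio ~ k + 1, i.e. factorial growth.
# Late regime (k >> 1/b): ratio approaches, but never exceeds, 1/b = 100.
```

The question "is c_K in the Gevrey region or the analytic region?" is then just: does K = floor(gamma) sit before or after the crossover k_b.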
So the claim is that I am in the Gevrey region — you can count this. So it means that to compute this number c_K, it's enough to know the b = 0 asymptotics: c_K is a perturbation of that number. Which means that indeed, the Gevrey regularity — because the growth is going to stop — is enough to understand what's happening here. Once you control your Taylor expansion, think of it this way: if I control this number, I know what's going on here. Because what's going to happen is the following. I'm interested in the last term, c_K, but of course I control my whole Taylor sequence. So think about what these terms do — this is some sort of semi-classical limit: at k = K, the coefficient is huge. So right at P2, this term is nothing, but immediately you start seeing it. And what I mean by immediately is the following. What is the C-infinity solution? The C-infinity solution is the solution that starts with this expansion, with c = 0, plus a higher-order term — and the higher-order part is just a fixed point, depending on this. So I can construct the C-infinity solution: it's just a fixed point; I just add a correction. And I know that I can control my correction on a certain region around P2. Because I know the size of this number, I can estimate how far I can go while still controlling the correction. If you control your Taylor expansion — your Taylor polynomial — you can say exactly up to where in space you control the C-infinity solution, and what it looks like. So you don't control it everywhere, but you control it far enough — and I'll be done. And what we say is the following. As we evolve, you should think of it this way. So let me erase, and I'll be done. So I erase this.
When we evolve the parameter between K and K + 1 — maybe there are some boundary values that need to be avoided — we compute the C-infinity solution. So what we can say is the following. There is a separatrix solution, which, after renormalization, is constant. On one side, what you can show is that the C-infinity solution that you compute in this range will touch a given neighborhood of the separatrix. So if you're on one side, all the trajectories arrive here; and then, just by looking at the ODE, you discover that, in fact, all these trajectories must exit. So again — I'm deforming the parameter here — what we see is that these C-infinity solutions always exit, either green or red, depending on the parity of K. For example, if it's even, they all go in that direction — say, to the left. And this is because there is some monotonicity; it's written in the operator. On the other side, what the solution does is more mysterious, because we lose the monotonicity. But what you can show is: if you're here, you exit here; and if you're there, you exit there. So as I deform my parameter, on the left they all exit, say, green — it's a question of parity — and in particular, none of them can be the separatrix: there is no way I can go up there. In other words, the separatrix cannot be C-infinity. But on the other side, I don't know what it does; I only know that at the endpoint of my deformation, I must exit either green or red. And as a matter of fact, the curve I'm interested in is somewhere over here. So these oscillations force that, at least during this deformation, there is one value where I'm going to catch the one I want — because I need to stay in this cone; I cannot go away from it.
So there is at least one value of my parameter in this interval such that I am going to get the solution that I want. But there are two keys to all this. The main key is to control your Taylor coefficients in what we like to call a semi-classical regime, and the key tool is Gevrey regularity: your limit problem comes with enormous growth, and this is how we control the Taylor terms. The second thing is this oscillation mechanism, which is simply something you see when you compute the next order: you understand what the solution looks like, and there is a difference, the left and the right do not behave the same way. And of course, it is an artifact of renormalization: there is no P5 point on the other side, so the behavior is completely different. So there is a change of nature of the problem after renormalization: oscillations on the right and no oscillations on the left. And this gives you what I want. So thank you. Some brief comments or questions? Yes, briefly. So all this really goes through by brute force. Now there are lots of questions, but say you could only ask one: what would happen if you tried to do the same thing for the wave equation? So, you are shooting right in the middle. For the heat equation, you know that there is no blow-up of this type. And for the wave equation, there are several kinds of things; it depends. If you take it complex, then maybe this has a chance to go somewhere. If you look at the scalar one, I don't know.
So if you take the scalar wave equation and you do exactly what we did, you look for what I call the connection problem, that is, the analog of Euler, et cetera, this seems not to go anywhere. If you do it for the complex wave equation, then this is very interesting; very interesting things show up. There should be a major difference between the two. Yeah, I think the complex case is a very interesting problem; it seems to make a lot of sense. But is there any analog of the Euler equation there? I'll tell you about that. I never talk about things that are not written down from A to Z. You know this: you get to talk after the paper is online, never before. I have lots of other questions. OK, so yes. Can you explain the main points of the bootstrap method, to close the nonlinear problem? So bootstrap is just this: you take a norm. Say I have my norm on phi, and at time 0 I am less than one half. You bootstrap: you assume that you are less than three quarters, this is your bootstrap assumption, and then you prove that you are actually less than one half again, which closes the argument. How can I say it: it is just the energy method. Maybe there is one thing I didn't say. Think of it this way. You have a linear problem, d_tau X = MX plus nonlinear terms. And there is something I forgot to tell you: I have this b^2 Laplacian sitting in the picture. When I do spectral theory, I ignore this term; that gives me estimates at low Sobolev regularity. So when I ignore it, I get that the norm of X(tau) in some H^k decays exponentially in time. But of course, this is at low Sobolev regularity. At high regularity, I am going to bootstrap: I pass to very high Sobolev norms. And when I do high Sobolev estimates, you see, this is exactly the point: why does the problem not like high Sobolev norms?
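The bootstrap logic just described (start below 1/2, assume you stay below 3/4, and prove the bound actually improves, so the assumption can never be saturated) can be checked on a scalar model. The ODE x' = -x + x^3 below is a made-up stand-in, not the system from the talk: the linear part decays, and the cubic nonlinearity is harmless as long as the bootstrap bound holds, since |x| <= 3/4 gives x' <= -(7/16) x.

```python
def bootstrap_demo(x0=0.5, T=20.0, dt=1e-3):
    """Model ODE x' = -x + x**3. Under the bootstrap assumption |x| <= 3/4,
    the cubic term is dominated by the linear decay, so the bound improves
    and is never saturated; we verify this along a forward Euler run."""
    x, t = x0, 0.0
    sup = abs(x)
    while t < T:
        assert abs(x) <= 0.75, "bootstrap bound violated"
        x += dt * (-x + x ** 3)
        sup = max(sup, abs(x))
        t += dt
    return sup  # the supremum never exceeds the initial bound
```

Starting from x(0) = 1/2, the trajectory in fact decays monotonically, so the supremum stays at 1/2: the improved bound that closes the bootstrap.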
Because I have my profile in there. But it is compact in space, well localized. So I can use the low Sobolev bound and move this term to the left-hand side. So I am going to bootstrap with some constants c1, c2, keeping the b^2 Laplacian on the left. And then all these norms are not enough: I also need an L-infinity estimate, a weighted L-infinity estimate, to close the nonlinear term. It's the standard thing. If you have a cubic term and you take lots of derivatives of it, what you are going to see is phi^2 nabla^k phi, so you need to tell me something about phi in L-infinity. And of course, we do this with weights, et cetera, so you have to be careful. But the point, and this is what I was trying to say, is that this is like old-fashioned energy estimates. Taking high derivatives makes things easy; the problem is the weight. You have to be very careful with your weights in space. But what I did with Sobolev norms, I can do with Sobolev norms and weights; it is just a scaling issue. This is something we learned how to do on other problems. OK, some more questions or brief remarks? Have you thought of some way of getting rid of the singularity in the cubic term? Getting rid of which singularity? The one on the cone, the fact that y equal to 1 is singular. Yeah, it is a change of variables; we have been thinking about this a lot. There are various things that could have been tried. I know that Franck and Hatem, when they studied the wave equation, certainly had this kind of issue too, but they were in a semi-linear setting, so they could do things that cannot be done here. At the end of the day, it is just a change of variables, completely elementary. It is just this idea that you have to remember that your operator is not about space only; there is a space-time term.
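The closure mechanism for the cubic term can be caricatured in one Gronwall line (my own toy model, with made-up constants): differentiating phi^3 k times produces terms like phi^2 nabla^k phi, so the H^k energy E satisfies roughly E' <= (-lambda + C * ||phi||_inf^2) * E, and the weighted L-infinity bound is exactly what keeps the rate negative.

```python
def energy_estimate(E0=1.0, phi_inf=0.3, lam=1.0, T=10.0, dt=1e-3):
    """Toy Gronwall closure for a cubic nonlinearity: the H^k energy E
    obeys E' <= (-lam + C*phi_inf**2) * E, where phi_inf models the
    weighted L-infinity bound on phi. Closes whenever phi_inf**2 < lam/C."""
    C = 3.0  # model Leibniz-rule constant, not a constant from the talk
    E, t = E0, 0.0
    while t < T:
        E += dt * (-lam + C * phi_inf ** 2) * E
        t += dt
    return E
```

With phi_inf = 0.3 the rate is -1 + 3*(0.09) = -0.73 < 0, so the energy decays and the estimate closes; without the L-infinity control there would be no sign to rely on, which is why the weighted estimate is needed.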
There is a second derivative, a space-time cross term, in the picture, and it is there to help you. This is all you need to understand. And once you have understood that you need to rely on this term, then you see what you have to do. But technically, it is just a change of variables; you adapt things. So thank you for all this. Thank you.