First of all, I would like to thank the organizers for the invitation. I'm very honored to be at IHES and in this summer school. Maybe one word about my other affiliation: the name of the math institute in Montpellier sounds a little like the name of the CNRS lab here, which is the Laboratoire Alexandre Grothendieck. We are the Institut Montpelliérain Alexander Grothendieck. Because, of course, Grothendieck is famous for the work he achieved here, and the institute here was essentially created for him. And he was initially a student at the University of Montpellier: his first year at university was completed in Montpellier, and he finished his career in Montpellier as a full professor, in the early 90s, I think. So Montpellier and IHES are two important locations for Grothendieck. And he loved icosahedra, the 20-faceted polyhedra, and he loved to teach about them — that's what my colleagues in Montpellier taught me. OK, now the talk. I will start with what is called geometric optics, which can also be called WKB expansion. WKB expansions are usually met for hyperbolic equations or hyperbolic systems. The idea, and that's how I will use them today, is to rule out some energy estimates: essentially, we construct particular solutions and we test energy estimates against them, and these particular solutions can tell you that certain kinds of estimates cannot be true. As often in this framework, there will be a small parameter, which eventually goes to zero; here it is named epsilon. And I will start with that. I forgot to say: this is essentially a review talk. I will present three kinds of results. Hopefully there will not be too many technicalities; I will try to give a flavor of the method of proof, but no rigorous proofs — formal proofs only. So, what do I mean by WKB expansion in the framework of Schrödinger equations?
So, every time there is a derivative in time or in space, there is a factor epsilon. The initial data are of the form of an amplitude, which does not depend on epsilon, times a rapid oscillation, e to the i phi-zero divided by epsilon. The amplitude is, of course, complex-valued, and the phase real-valued. The space variable belongs, to begin with, to R^d. A WKB expansion consists in seeking an approximate solution, at least at leading order, of the exact solution in the same form as the initial data. So how do we find a and phi? We plug this ansatz into the equation, we order the powers of epsilon, and we cancel, starting from the largest term — the lowest power of epsilon — as many terms as possible. The first term we get is when we differentiate the phase only: every time we differentiate the phase, there is a one-over-epsilon factor, but each derivative comes with an epsilon factor, so this term becomes of order one. This is the eikonal equation, which is a Hamilton–Jacobi equation. If you consider the gradient of phi, it satisfies a multidimensional Burgers equation, so you have to expect finite-time singularities. The next term in the expansion is a transport equation for the amplitude, and the coefficients of the transport equation are given by the phase. Fair enough; now some examples. If the initial phase is linear in x, then we have an explicit formula: the evolution is linear in x and t, and there is a dispersion relation, as physicists call it. And you see that the solution to this Hamilton–Jacobi equation is global in time. On the other hand, if you start with a quadratic phase, you still have an explicit formula, which shows that the phase becomes singular in finite time, and you can compute exactly at what time. Now, the nonlinear case. In this talk I will consider only defocusing nonlinearities, so there is a plus sign here; I will write the energy on another slide. So there is no blow-up in the sense of formation of singularities.
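Since the slides are not reproduced in this transcript, here is a plausible reconstruction of the equations just described, in standard semiclassical notation (the symbols u^ε, a, φ are my choice):

```latex
% Linear semiclassical Schrodinger equation with WKB initial data:
i\varepsilon\,\partial_t u^\varepsilon + \frac{\varepsilon^2}{2}\,\Delta u^\varepsilon = 0,
\qquad u^\varepsilon(0,x) = a_0(x)\,e^{i\phi_0(x)/\varepsilon},\quad x\in\mathbb{R}^d.

% WKB ansatz: u^eps ~ a(t,x) e^{i phi(t,x)/eps}.  Ordering powers of eps:

% O(1): eikonal (Hamilton--Jacobi) equation; grad(phi) solves Burgers:
\partial_t\phi + \tfrac12\,|\nabla\phi|^2 = 0, \qquad \phi(0,x)=\phi_0(x).

% O(eps): transport equation for the amplitude, coefficients given by phi:
\partial_t a + \nabla\phi\cdot\nabla a + \tfrac{a}{2}\,\Delta\phi = 0, \qquad a(0,x)=a_0(x).

% Examples: a linear phase gives a global solution, a concave quadratic
% phase focuses in finite time (singular at t = 1):
\phi_0(x)=v\cdot x \;\Longrightarrow\; \phi(t,x)=v\cdot x-\tfrac{|v|^2}{2}\,t,
\qquad
\phi_0(x)=-\tfrac{|x|^2}{2} \;\Longrightarrow\; \phi(t,x)=-\frac{|x|^2}{2(1-t)}.
```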
And because the equation is nonlinear, the size of the initial data in terms of epsilon is an important parameter. But since I consider a homogeneous nonlinearity, I can keep the same initial data as before, and then there is an extra feature: the coupling constant depends on epsilon. I want to proceed as in the linear case, and this is sensible thanks to gauge invariance: when you plug the ansatz into the nonlinearity, you get the modulus of a to the power 2 sigma, and one oscillation only — you still have one oscillation carried by one exponential. So the ansatz makes sense. Let's do the same trick as before; then, of course, the discussion will depend on the value of alpha. So let me write the equation once and for all. We seek the solution, approximately, in this form — I will not be too precise about the meaning of "approximately" — and we plug it into the equation. Now comes the distinction: if alpha is positive, the eikonal equation, the Hamilton–Jacobi equation, does not see the nonlinearity yet — supposedly. On the other hand, if alpha equals zero, this term is of order one, and the left-hand side involving phi is also of order one. I will just keep this equation for the rest of the talk; that's the only thing that really matters to me. And I will rather write alpha at least one, because, as a matter of fact, I am really cheating here: it is much more tricky than that. OK, so with the same initial data, now the transport equation, the terms of order epsilon. If alpha is larger than one, you still don't see the nonlinear term. You do see it, on the other hand, if alpha equals one, and this becomes a nonlinear transport equation, which you can essentially solve — I will not mention that today. And if alpha is less than one, well, it is mysterious, but today we don't pay much attention to that. So there are two critical values, which I will use during the talk.
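In the same assumed notation, the nonlinear semiclassical problem and the two distinguished cases read as follows (a sketch, not a transcription of the slide):

```latex
% Defocusing NLS with eps-dependent coupling and the same WKB data:
i\varepsilon\,\partial_t u^\varepsilon + \frac{\varepsilon^2}{2}\,\Delta u^\varepsilon
  = \varepsilon^{\alpha}\,|u^\varepsilon|^{2\sigma}u^\varepsilon .

% alpha >= 1: the eikonal equation does not see the nonlinearity, and for
% alpha = 1 the nonlinear term enters the transport equation:
\partial_t\phi + \tfrac12|\nabla\phi|^2 = 0,
\qquad
\partial_t a + \nabla\phi\cdot\nabla a + \tfrac{a}{2}\Delta\phi = -\,i\,|a|^{2\sigma}a .

% alpha = 0: the coupling enters at order one, in the eikonal equation itself:
\partial_t\phi + \tfrac12|\nabla\phi|^2 + |a|^{2\sigma} = 0 .
```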
Alpha equal to one, which is the smallest value of alpha for which the nonlinear term appears in these two equations, and alpha equal to zero, which is the worst case among the non-negative alpha I wanted to consider, where everything becomes coupled — especially here, which is hidden. The only remark I want you to remember today: if alpha is at least one, this is the same eikonal equation as before, the linear one. On the other hand, if alpha equals zero, the coupling is so strong that even if you start with phi-zero equal to zero — so at time zero this term is zero — unless you start with the zero initial datum, it will not remain zero. You start from zero, but dt phi at time zero is given by this term, so dt phi is not zero. So even if you have no oscillation at time zero, rapid oscillations — because the phase is divided by epsilon — appear instantaneously. That's all I want you to remember. Now back to NLS with no small parameter yet, defocusing. Here are the two conserved quantities I will be interested in: the L2 norm — the mass, or power, or number of particles, depending on the physical context — and the energy. The energy is a sum of two positive terms. OK, so now this equation enjoys two important invariances. First, the scaling invariance: if u is a solution, then for any positive lambda, this rescaled function is a solution to the same equation, and this scaling leaves the homogeneous Sobolev norm H-s-dot invariant — forget about time here — for s equal to sc, which is given by this formula. Maybe I keep this formula here, because not so many people use the same notation for the nonlinearity. The other invariance I want to mention is the Galilean invariance: if u is a solution, then so is this quantity, for every vector v of R^d, and the norm preserved by this transform is the L2-in-space norm.
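For reference, the conserved quantities and the two invariances can be written out for the defocusing equation i∂_t u + ½Δu = |u|^{2σ}u (standard formulas, in my notation):

```latex
% Mass and (defocusing) energy, both conserved:
M(u) = \|u(t)\|_{L^2}^2,
\qquad
E(u) = \frac12\,\|\nabla u(t)\|_{L^2}^2
     + \frac{1}{\sigma+1}\,\|u(t)\|_{L^{2\sigma+2}}^{2\sigma+2} .

% Scaling invariance: for every lambda > 0,
u_\lambda(t,x) = \lambda^{1/\sigma}\,u(\lambda^2 t,\,\lambda x)
% solves the same equation, and
\|u_\lambda(0)\|_{\dot H^{s}} = \lambda^{\,s-s_c}\,\|u(0)\|_{\dot H^{s}},
\qquad s_c = \frac d2 - \frac1\sigma ,
% so the homogeneous norm \dot H^{s_c} is left invariant.

% Galilean invariance: for every v in R^d,
u(t,\,x-vt)\;e^{\,i\left(v\cdot x-\frac{|v|^2}{2}\,t\right)}
% is again a solution, and the L^2 norm is preserved.
```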
And this is a clue that, if sc is positive, you can expect some notion of well-posedness — I don't want to give a precise definition — at least if s is larger than sc. The notion of well-posedness I am using is the same as in Joachim's previous talk: I want local well-posedness with an existence time depending not on the profile of the initial data, but on their size only. Equality s = sc would be valid with a different notion of well-posedness. Now, sc non-negative means that the nonlinearity is L2-critical or L2-supercritical; that's what the numerology says. Any regularity s larger than sc gives rise to a locally well-posed Cauchy problem. And if sc is negative — if you are L2-subcritical — then Tsutsumi has shown that for every non-negative regularity s, the problem is locally well-posed from H^s to H^s. Why do you have to assume that s is non-negative? Because of the Galilean invariance. OK, some results concerning the lack of well-posedness. In this first part of the talk, I will always assume that sc is strictly positive. That means, once again, that the nonlinearity I consider is L2-supercritical — possibly energy-subcritical, energy-critical, or energy-supercritical. And I want to start from not enough regularity: s, the H^s regularity of the initial data, is between 0 and sc. The first result in this direction is due to Lebeau, for the energy-supercritical wave equation: the 3D wave equation with defocusing nonlinearity, energy-supercritical meaning that p is at least 7. Well, I won't give the statement, but Lebeau proved more properties than were needed for his result, and the proof was greatly simplified by Guy Métivier in a Bourbaki seminar. For the nonlinear Schrödinger equation there are similar results; the statement will probably be on the next slide.
It is due to Christ, Colliander, and Tao, and to Burq, Gérard, and Tzvetkov, and the argument, which I will summarize on a formal level, is the following. You start with concentrated initial data: think of A0 as a smooth, compactly supported initial amplitude. Now I have a small parameter h, which goes to zero, and the initial datum concentrates at the origin at scale h. There is a normalizing factor, and if I want my initial data to be bounded in H^s, this boils down to this inequality. And thanks to a scaling I will show later on, we can actually relate the nonlinear Schrödinger equation without epsilon to what happens in the geometric optics regime in the supercritical case, meaning that here I consider alpha equal to 0. The method of proof in these papers is the following. For a very short time, with alpha equal to 0, there is an ODE approximation, which consists in neglecting the spatial derivatives here. So the approximate solution considered is this one. It seems to be a nonlinear ODE, but if you have ever seen it, you know it is actually a linear ODE, because the modulus is conserved: this is the modulus of the initial datum. My initial datum after scaling is A0, independent of epsilon, so the solution is given explicitly in terms of A0 by this formula. What you see is that you have oscillations like t over epsilon, so the solution becomes rapidly oscillatory. You can prove by energy estimates that the approximation of the exact solution by this ansatz is valid for a very short time — epsilon times log epsilon to some power, which is really short. But this is enough: when t over epsilon is like log epsilon, which is unbounded, you have become rapidly oscillatory. And once you are rapidly oscillatory, the H^s norms become unbounded. That's essentially the picture. Then I will focus on the phenomenon called norm inflation.
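The ODE approximation just described can be made explicit (a sketch; A_0 denotes the rescaled, ε-independent initial amplitude):

```latex
% Neglect the Laplacian in the semiclassical equation (alpha = 0):
i\varepsilon\,\partial_t \psi = |\psi|^{2\sigma}\psi
\;\Longrightarrow\; \partial_t|\psi|^2 = 0 ,
% so the modulus is conserved and the ODE is effectively linear:
\psi^{\varepsilon}_{\mathrm{app}}(t,x)
   = A_0(x)\,\exp\!\Big(-\,i\,\frac{t}{\varepsilon}\,\big|A_0(x)\big|^{2\sigma}\Big).
% Oscillation in t/eps: for t of order eps*log(1/eps), the phase t/eps is
% of order log(1/eps), hence unbounded, and the H^s norms start to grow.
```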
And the statement about finite loss of regularity — this explains the first part of the title — is the following. The cubic case sigma equal to 1 was given with one proof. The defocusing case, more than cubic, was proven in a joint work with Thomas Alazard for defocusing nonlinearities, and at the same time by Laurent Thomann, whose proof allowed one to consider focusing nonlinearities, which is not possible in the first two papers. Actually, these three papers use three different techniques for the rigorous part, which I will not explain today, and there would even be a fourth proof available now. OK, so the statement is the following. sc is positive (L2-supercritical), and s is between 0 and sc. We can find smooth, rapidly decaying — say Schwartz-class — initial data, a sequence of initial data whose H^s norms go to 0, and a sequence of times going to 0, along which the nonlinear evolution blows up in H^k. The result by Christ, Colliander, and Tao is for k equal to s; here we allow k to be strictly smaller than s, and this is why I call it loss of regularity. Since the denominator is larger than 1, this quantity is positive, so this is a finite loss of regularity: k is bounded from below. And we'll see why you cannot expect better — at least we know we have the L2 invariant, so we could not go below L2. As a corollary, another way to state this is that the local Cauchy problem is not well-posed from H^s to H^k. That means that even if you are allowed to pay some loss of regularity in your energy estimates, the loss of regularity has to be at least this one; there can be no energy estimates for these kinds of norms. A final word about the numerology: consider s_sob, the sharp value such that H^s is embedded into L^{2σ+2}. The L^{2σ+2} norm gives the potential energy in the Hamiltonian — so let me write it here; the energy corresponds to that.
So how do we control this term when it is not possible via the first, kinetic term — that is, when we are energy-supercritical? The value of s_sob is given by this numerology. And what is this formula? It is exactly the previous one with s equal to s_sob. So what does the previous statement say with s equal to s_sob? We are in space dimension at least 3, which means we are energy-supercritical. We consider data which are small in H^{s_sob}. In particular the mass, which is the L2 norm, goes to 0, and so does the energy: s_sob is larger than 1, so this term goes to 0, and s_sob is such that this term is controlled by the H^{s_sob} norm. So both terms go to 0, hence the energy goes to 0; we know it is conserved, so it goes to 0 for all time. On the other hand, the numerology, because of this formula, tells you that every norm above the energy blows up instantaneously. And this is, of course, sharp, because for k equal to 1 we get the first term of the energy. So at least for energy-supercritical equations, we know the numerology is sharp; for energy-subcritical, I don't know. The argument is different from Lebeau's argument, but the statement is the same, and the spirit of the formal proof is the same. [Question from the audience:] If I remember the proof of Lebeau, he gets one initial datum for which there is loss of regularity, whereas here you need a sequence. Is the only difference that you have finite speed of propagation for the wave equation? [Speaker:] Well, can I ask you to ask me the question at the end? Because there is a longer answer. OK, so the formal proof is the following. Like Christ, Colliander, and Tao and Burq, Gérard, and Tzvetkov, we consider concentrated initial data, normalized so that the H^s norm is of order 1, but the H^m norms for m larger than s blow up as h goes to 0. And if you want to introduce the semiclassical parameter, you play with these factors: you modify the parabolic scaling with this epsilon. The goal is to get this equation with alpha equal to 0.
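The concentrated data just mentioned can be written as follows (my normalization, chosen to be consistent with the stated numerology):

```latex
% Data concentrated at the origin at scale h, normalized in \dot H^s:
u_0(x) = h^{\,s-\frac d2}\,A_0\!\Big(\frac xh\Big),
\qquad
\|u_0\|_{\dot H^{m}} = h^{\,s-m}\,\|A_0\|_{\dot H^{m}} .
% The \dot H^s norm is of order 1, while for every m > s the \dot H^m
% norm blows up as h -> 0.
```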
OK, so the formula that comes out is this one, and you see that epsilon and h go to 0 simultaneously precisely because you are in a supercritical situation, s smaller than sc. So we have this, and the initial data do not oscillate. The relation given by the scaling is that one. And the key phenomenon I want you to remember is this one, for a time tau of order 1. That means we are not using the ODE approximation; we are using something much more sophisticated, and the proof — I will say a word about it in a remark — is completely different. So psi is epsilon-oscillatory, in the sense that the phase has become of order 1: we have this epsilon-oscillation here. Now let's plug this into the formula. At time t equal to tau times h squared times epsilon, the H-dot-m norm is of this order. Since psi is epsilon-oscillatory, when we differentiate m times we gain epsilon to the minus m; we plug in the formula for epsilon and get h to the power s minus m minus m sigma times (sc minus s), and this power is negative exactly for m strictly larger than the value in the statement. This is where the loss of regularity comes from: for psi, you start with something which is not oscillatory, but it becomes rapidly oscillatory instantaneously — at a time tau of order 1, which is much larger than this epsilon log epsilon. Actually, to prove the ODE approximation, you just use energy estimates in high-order Sobolev spaces on a short time interval, because eventually you invoke Gronwall's inequality. But if you want to go up to a time of order 1, you need quasilinear analysis in the defocusing case, or an analytic setting, which also covers the focusing case. For fixed epsilon, the usual approach solves the equation by a fixed-point argument, as a perturbation of the free flow — that is semilinear analysis.
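A plausible form of the scaling relation and of the norm computation sketched here (my reconstruction; it reproduces the exponents quoted in the talk):

```latex
% If psi^eps solves i*eps*d_t psi + (eps^2/2) Delta psi = |psi|^{2 sigma} psi
% with data A_0, then
u(t,x) = h^{\,s-\frac d2}\,
         \psi^{\varepsilon}\!\Big(\frac{t}{\varepsilon h^{2}},\,\frac xh\Big),
\qquad
\varepsilon = h^{\,\sigma(s_c-s)} ,
% solves NLS without parameter; eps -> 0 together with h exactly when s < s_c.

% At time t = tau * eps * h^2 (tau of order 1), psi is eps-oscillatory, so
\|u(t)\|_{\dot H^{m}} \;\approx\; h^{\,s-m}\,\varepsilon^{-m}
  \;=\; h^{\,s-m-m\sigma(s_c-s)} ,
% which is unbounded as h -> 0 exactly for  m > s / (1 + sigma(s_c - s)).
```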
So when introducing this semiclassical parameter epsilon with alpha equal to 0, in a way the problem has become much more nonlinear, and justifying this one line is the tough part, about which I will not say more today. OK, now a side remark, essentially: what if the Laplacian is modified? You consider a Fourier multiplier P; the important assumption is that P is real-valued. Because P is real-valued, the free group is unitary on every Sobolev space based on L2. Essentially, if P is homogeneous, the scaling yields this critical index, and indeed you can prove that there is norm inflation, in the same sense as in the paper by Christ, Colliander, and Tao. I am not claiming loss of regularity — just norm inflation: ill-posedness from H^s to H^s. And the proof is the same. What do I mean by "same"? It is this approximation here; essentially, what we do is this. It is not P because the notations are not consistent, but essentially the proof is this one: we approximate this by that. OK, so there are two cases: either P is homogeneous, and this will be really unbounded, or P is bounded, and the difference lies in the critical Sobolev regularity. We know there is, of course, local well-posedness above H^{d/2}, because it is an algebra embedded into L-infinity. And actually, when P is bounded, you can prove norm inflation, essentially by this ODE approximation, for any s strictly smaller than d/2. And the side remark is that this first statement implies that there is no isotropic Strichartz estimate improving on Sobolev embedding. The statement is the following: Strichartz admissible pairs are given by this algebraic relation, and Sobolev embedding tells you that the L^q norm is controlled by this Sobolev norm. But Sobolev spaces are based on L2, and the group is unitary, so we actually have equality from this line to this line.
And I am considering a finite time interval, so Hölder's inequality roughly gives this; here is what is given by the Sobolev estimate. So the corollary is the following. Suppose that for some finite time interval there is an admissible pair and a real k such that you have an estimate like this. Remember, in yesterday's talk Nicolas recalled the result by Burq, Gérard, and Tzvetkov stating that on a compact manifold you can take k equal to 1/p, which is half of what you get by Sobolev embedding — so there is an improvement. What I am saying here is that if the Fourier multiplier P is bounded, then k, if this estimate is true, is at least 2/p: you cannot improve on Sobolev embedding in such a statement. And the indirect proof is the following — it is really a matter of rewriting the proof of Burq, Gérard, and Tzvetkov for compact manifolds. Suppose you have some gain, something that improves on Sobolev embedding, that is, k strictly smaller than 2/p — for instance 1/p, as in the case of a compact manifold — so that you have this. Then you can solve the nonlinear Schrödinger equation locally in time, with this numerology: it is well-posed in H^s, where s is here, sigma is here, and p is given by the Strichartz inequality. We take sigma equal to 1 to infer the corollary — that means a cubic defocusing nonlinearity. The proof essentially relies on a fixed point in this space, and this is where we use this Strichartz-type inequality, together with the fact that, under this assumption, we have this embedding, which is very convenient to treat the nonlinear terms. So it is just a rewriting of the proof of Burq, Gérard, and Tzvetkov. And then, if you had such an estimate for some k strictly smaller than 2/p, you could solve the problem below H^{d/2}. But you know you can't, so there is no such inequality.
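To fix ideas, here is the standard bookkeeping behind this corollary, in the usual notation for Schrödinger-admissible pairs (the slides index the pair differently, so this is a hedged reconstruction rather than a transcription):

```latex
% Admissible pair (q,p):  2/q = d(1/2 - 1/p).
% Sobolev embedding plus unitarity of e^{itP} on H^k give, on a finite
% time interval I,
\big\|e^{itP}f\big\|_{L^q(I;L^p)} \;\le\; C_I\,\|f\|_{H^{k_S}},
\qquad k_S = d\Big(\frac12-\frac1p\Big) = \frac2q .
% Burq--Gerard--Tzvetkov (compact manifolds): the exponent can be halved,
% k = 1/q.  The corollary here: for a *bounded* multiplier P, any
% estimate of this form forces k >= 2/q, i.e. no improvement on Sobolev
% embedding is possible.
```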
So the last part of the talk is about infinite loss of regularity. Now I consider negative regularity: we had this threshold here, and the other threshold given by Galilean invariance, which is s equal to 0; now I am below s equal to 0. There are already many results concerning the lack of local well-posedness. In 1D, Christ, Colliander, and Tao proved that there is no local well-posedness from H^s to H^k provided s is negative, and this is true for any k: no matter how close to minus infinity k is, the problem is not locally well-posed. This is already a hint of infinite loss of regularity; there will be a more precise statement later. It was refined by Molinet in the cubic 1D case, and I will present some generalizations on the next slide. Again, translated into the terms of geometric optics, of WKB expansion, here is the picture. We start with a semiclassical Schrödinger equation, but with an epsilon here — this is alpha now, and from now on I will consider only alpha equal to 1, so I will just keep this. The eikonal equation for a single-phase WKB expansion is this one. The idea is that you start from two rapid oscillations, one of which is not an oscillation at all. And actually, you can compute exactly the leading-order approximation: you can determine a0 and a1 as functions of t. This is exactly the solution to the eikonal equation with initial datum x1; I've shown the explicit formula, and in this case it is this one. And the main point is that this approximation holds up to a remainder of order epsilon in the Wiener algebra, the Fourier image of L1. Why is the Wiener algebra a convenient framework? The name strongly suggests it is an algebra, and it is indeed an algebra thanks to Young's inequality, embedded into L-infinity. And the nice thing is that the W norm does not see the plane-wave oscillation. OK, so as I said, we have formulas for a0 and a1; they are coupled.
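The Wiener algebra and the two-mode ansatz alluded to here are, in standard notation (my reconstruction of the slide):

```latex
% Wiener algebra: the Fourier image of L^1,
\|f\|_{W} := \big\|\widehat f\,\big\|_{L^1} .
% Young's inequality makes W a Banach algebra, W embeds into L^infty, and
% a plane oscillation is a mere translation in Fourier variables, so
\big\|a\,e^{\,i k\cdot x/\varepsilon}\big\|_{W} = \|a\|_{W} .

% Two-mode ansatz: one non-oscillating mode plus one rapid oscillation,
\psi^{\varepsilon}(t,x) \;\approx\; a_0(t,x) \;+\; a_1(t,x)\,e^{\,i\phi_1(t,x)/\varepsilon},
\qquad
\phi_1(t,x) = x_1 - \frac t2 ,
% phi_1 solving the eikonal equation with initial datum x_1.
```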
And unlike the first part of the talk, where the appearance of rapid oscillations was a transfer from low frequencies to high frequencies, here, when you consider negative Sobolev spaces, the phenomenon is a transfer from high to low frequencies. The high frequency is this one, because it is rapidly oscillatory; the low frequency is this one, because there is no oscillation. And the idea is that you take a0 small and a1 large; then a1 will feed a0, and a0 will become large. That's the rough picture in this case. OK, and now some more quantitative statements. Christ, Colliander, and Tao also proved lack of well-posedness with norm inflation, in the same sense as before, from H^s to H^s — the target space is the same as the initial space — in dimension at least two; I will explain the role of dimension two later on. The argument of Bejenaru and Tao, based on a Picard iterative scheme, makes it possible to prove rather easily the lack of well-posedness from H^s to H^k for every k, like in the case on the circle I mentioned before. Now, the result I won't say too much about today is the following. Suppose we are in dimension at least two, and sigma is a natural number, which means the nonlinearity is smooth. And there is this upper bound on s, which is not very pleasant, but we don't know how to get rid of it: we would expect the condition to be simply s negative, but we need s below this value, which is at least better than minus d/2, since d is at least 1. And the statement is, in a way, the same as before: we have small initial data, smooth and rapidly decaying, a sequence going to 0 in H^s, and a sequence of times going to 0, along which all — absolutely all — the Sobolev norms blow up at the same time. This is why I call it infinite loss of regularity.
So you cannot expect any energy estimate involving a Sobolev norm on the left-hand side and the initial data in that same space on the right. OK, I will rather insist on the periodic case, because there the assumption is simply s negative. So, the periodic case. This is a joint work with Thomas Kappeler, who is more of an expert than I am on the periodic case. He noticed that the proof is valid in Fourier–Lebesgue spaces, which are generalized Sobolev spaces. I don't want to describe them in detail, but for p equal to 2 this is the H^s Sobolev space, and otherwise something slightly more general; you can read the statement with p equal to 2, and that already gives the main picture. So suppose d times sigma is at least 2; it turns out this means you are L2-critical or L2-supercritical. The way it comes out in the proof: sigma is always an integer, so either sigma is 1 or more and d is at least 2, or in 1D we can allow quintic and higher nonlinearities. This is essentially just to rule out the 1D cubic case; the 1D cubic appears as the second case because it is less satisfactory. So pick any negative s. Then we can find a sequence of smooth initial data whose H^s norms go to 0, and a sequence of times going to 0, along which all the Sobolev norms blow up at the same time — any Sobolev regularity at all. So it is the same as on R^d, except that here the assumption on s is more satisfactory. And in the cubic 1D case, we have the restriction that s has to be smaller than minus 2/3, and then the statement is exactly the same. We know for sure that minus 2/3 is not sharp as a threshold for the problem not to be locally well-posed: for instance, the result by Oh for the 1D cubic equation proves that if s is at most minus 1/2, the same statement is true, but with k equal to s. So that is norm inflation — not loss of regularity, but norm inflation.
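For completeness, the standard definition of the Fourier–Lebesgue spaces on the torus (with p = 2 one recovers H^s by Parseval, as stated above):

```latex
\|f\|_{\mathcal{F}L^{s,p}(\mathbb{T}^d)}
  := \big\|\,\langle n\rangle^{s}\,\widehat f(n)\,\big\|_{\ell^p(\mathbb{Z}^d)} ,
\qquad
\mathcal{F}L^{s,2}(\mathbb{T}^d) = H^{s}(\mathbb{T}^d) .
```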
And also, there is a modification in the cubic case, 1D or higher, because it has been known at least since the work of Christ, Colliander, and Tao that the zero mode plays a crucial role in all instability issues. So there is the Wick-ordered equation, which consists in removing the zero Fourier mode from the nonlinearity. But actually, the proof does not see the difference when you remove it, so the statement remains valid. OK, so one word about the multiphase case. Weakly nonlinear geometric optics — "weakly" because here we have an epsilon, which means that the eikonal equation is the same as in the linear Schrödinger equation. So "weakly" means that the only thing that may see the nonlinearity is not the phase, but the amplitude. We start from possibly infinitely many linear phases, with this small parameter epsilon, and we look for an approximate solution of the same form. So the questions are: how do we choose the phi_j, and how do we choose the a_j? The phi_j will solve the eikonal equation, and since we start with phases linear in x, we have an explicit formula, which is recalled here. So we know the phi_j; the goal is to determine the evolution of the amplitudes a_j. And eventually, the picture is the following: we will start with a_0 equal to 0, so no zero Fourier mode, and by nonlinear interaction of nonzero Fourier modes, the zero Fourier mode will appear instantaneously. So there will be a transfer from high frequencies to low frequencies — to the lowest possible frequency, the zero mode. OK, when you plug such an expansion into the equation, you have to look at the nonlinearity. Every time you see a psi here, you plug this in. So you have many sums — 2 sigma plus 1 of them — and the new phases you get, because of gauge invariance, are of this form. And since each of these terms is linear in (t, x), the outcome is linear in (t, x), so it is of this form.
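The multiphase ansatz and the phases it generates can be written schematically (my notation):

```latex
% Weakly nonlinear geometric optics: superposition of linear phases,
\psi^{\varepsilon}(t,x) \;\approx\; \sum_{j} a_j(t,x)\,e^{\,i\phi_j(t,x)/\varepsilon},
\qquad
\phi_j(t,x) = k_j\cdot x - \frac{|k_j|^2}{2}\,t ,
% each phi_j solving the eikonal equation with datum k_j . x.
% Plugging into |psi|^{2 sigma} psi (2 sigma + 1 sums) produces, by gauge
% invariance, new phases of the form
\phi_{j_1} - \phi_{j_2} + \phi_{j_3} - \cdots + \phi_{j_{2\sigma+1}} ,
% with sigma + 1 plus signs and sigma minus signs: again linear in (t,x).
```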
And what we will call a resonance — I think you will hear about that later during this summer school — is when the phase obtained by nonlinear interaction is also a solution to this eikonal equation. What physicists call the dispersion relation is exactly that omega equals |k| squared over 2. And the natural definition of the resonance set is as follows. This is what you get by combinations of such phases: say phi_j is k_j dot x minus |k_j| squared over 2 times t, and you take linear combinations of these, as above. The linear combination in x gives you this relation, a wave vector k, and we want the outcome to be a solution to the eikonal equation — that is, the resonance condition is satisfied, which is given by this other algebraic relation. So the resonance set is this one. And the claim is that only this resonance set matters. Because if you have the first relation but not the second, then, since we work with integers, a non-stationary phase argument tells you that, when you plug this into the equation, it gives rise to a source term of order epsilon squared — because here we have an epsilon — and then energy estimates, in the Wiener algebra for instance, tell you that this gives rise to a remainder of order epsilon. So only the resonance set matters. In the cubic case, where we have only three terms, the description of the resonance set is given in the paper by Colliander, Keel, Staffilani, Takaoka, and Tao that Giulia mentioned yesterday, and the resonance set is fully described by an algorithm based on the completion of rectangles: you essentially use the Pythagorean theorem to get this description. For quintic and higher nonlinearities it is more intricate, but we don't have to describe it fully, because, as I said before, it is enough to start from nonzero phases and create the zero phase. That's all we want; we don't need a full description.
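The resonance condition described here, written out in its standard form:

```latex
% The combination of phases carries a wave vector and a frequency,
k = \sum_{\ell=1}^{2\sigma+1} \pm\, k_{j_\ell},
\qquad
\omega = \sum_{\ell=1}^{2\sigma+1} \pm\,\frac{|k_{j_\ell}|^2}{2}
% (sigma + 1 plus signs, sigma minus signs).
% Resonance: the outcome again solves the eikonal equation, i.e. the
% dispersion relation omega = |k|^2/2 holds, which reads
\Big|\sum_{\ell} \pm\, k_{j_\ell}\Big|^2 \;=\; \sum_{\ell} \pm\, |k_{j_\ell}|^2 .
% Non-resonant combinations: a non-stationary phase argument gives an
% O(eps^2) source term, hence an O(eps) remainder in the Wiener algebra.
```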
So a remark: if you can generate a resonance in the cubic case, you can generate the same resonance for higher-order nonlinearities by just repeating the last phase. What you can do in the cubic case, you can do for higher nonlinearities. OK, so the key example, which is an example of the rectangle completion: you start from these three phases. This is why we have to assume d is at least 2: the last d minus 2 coordinates, if any, can be taken to be zero; they don't matter. So it is a 2D phenomenon. You start from these three phases — x1 over epsilon, (x1 plus x2) over epsilon, and x2 over epsilon — and then, of course, it is easy to check these relations: zero is a resonance. That means, by the completion of the rectangle, that you start from three nonzero Fourier modes and, by nonlinear interaction, you expect to create the zero Fourier mode. So there is a transfer from high to low frequencies through this example alone. The amplitudes here are given, in the periodic case, by this relation, where you restrict to the resonance set: if you did not restrict, this would just be rewriting the nonlinear Schrödinger equation in Fourier variables. You restrict to this because that is all that matters. And a word of caution: knowing that phases resonate does not mean that the corresponding term will actually appear. In the cubic 1D case, you still have the completion-of-rectangles algorithm, but the rectangles are flat because we are in 1D, and actually you can see that this equation simplifies in such a way that the modulus of each term is conserved. So if a mode starts from zero, it will remain zero. That's why the previous statement was restricted to dimension 2 and higher: this is a multi-dimensional phenomenon. And then, if you are at least two-dimensional and consider the previous example, essentially, you start from three oscillations and they will create the zero mode.
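As a sanity check, the resonance relations for this key example can be verified directly; the following minimal sketch (variable names are mine) checks the cubic combination k1 − k2 + k3 coming from |u|²u:

```python
# Key example of the "completion of rectangles": three nonzero Fourier
# modes whose cubic interaction creates the zero mode. The last d - 2
# coordinates play no role, so we work in 2D.
k1 = (1, 0)   # phase x1 / eps
k2 = (1, 1)   # phase (x1 + x2) / eps
k3 = (0, 1)   # phase x2 / eps

def sq(k):
    """Squared Euclidean length |k|^2 of an integer vector."""
    return sum(c * c for c in k)

# The cubic nonlinearity |u|^2 u combines phases as phi_1 - phi_2 + phi_3:
k_out = tuple(a - b + c for a, b, c in zip(k1, k2, k3))

# Resonance condition: |k1 - k2 + k3|^2 == |k1|^2 - |k2|^2 + |k3|^2.
lhs = sq(k_out)
rhs = sq(k1) - sq(k2) + sq(k3)

print(k_out, lhs, rhs)  # -> (0, 0) 0 0 : the zero mode is resonant
```

Both relations hold with the zero vector as outcome, so the zero Fourier mode is indeed resonant for this triple, as claimed.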
If the associated amplitude is the same for each of the three, for instance equal to 1, you can see that the zero mode is indeed created by this equation. Just a word: once again, a non-stationary phase argument makes it possible to show that, in L2 intersected with the Wiener algebra, this is indeed a good description of the exact solution. And maybe just to finish, although it's probably too late, I will just wave my hands. You use a scaling. Scaling in the periodic case may be tricky; that's why ε_n is a carefully chosen sequence going to zero, where carefully means we want to remain periodic. We use the scaling so that the rescaled function, given by this relation, solves this equation, in order to use the previous approximation. So that means for u0 we consider this, and we measure the H^s norm of this guy. We have this factor, and we take the H^s norm, which means you pick up this negative power of ε_n to the s. So the H^s norm at time zero is of this order. Since s is negative, I want this guy to go to zero; this is one condition. And you see that you can always find a positive b so that this is true, because s is non-zero. So there is creation of the zero mode by this algorithm, by this geometric optics approximation. Once again, you rescale time, so a time which was of order one here becomes a time going to zero in terms of u. And we can bound from below absolutely any Sobolev norm by the zero mode; that's the key idea, and the zero mode has appeared. In front, we have this factor, and if beta is positive, this of course goes to infinity. So that's it, and now it's just algebra. Essentially, as far as the zero mode is concerned, every Sobolev norm is equivalent; that's the idea to conclude. So we know that the approximation also makes sense at the level of the solution u.
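Schematically, the numerology works as follows; this is only a sketch with hypothetical exponents b and β (the precise powers were on the slides), but it captures the two inequalities being balanced: oscillation at frequency 1/ε_n is cheap in negative Sobolev norms, while the created zero mode bounds every Sobolev norm from below.

```latex
% Data: small amplitude, rapid oscillation at frequency 1/\varepsilon_n.
\[
  u_0^{(n)}(x) = \varepsilon_n^{\,b}\, a_0(x)\, e^{i\phi_0(x)/\varepsilon_n},
  \qquad
  \|u_0^{(n)}\|_{H^s} \approx \varepsilon_n^{\,b}\,\varepsilon_n^{\,-s}
  \;\longrightarrow\; 0
  \quad\text{for } s<0,\ b>0 .
\]
% At a rescaled time t_n \to 0, the created zero mode gives, for every
% \sigma, the lower bound (every Sobolev norm sees the zero mode equally):
\[
  \|u^{(n)}(t_n)\|_{H^\sigma}
  \;\ge\; \bigl|\widehat{u^{(n)}}(t_n,0)\bigr|
  \;\approx\; \varepsilon_n^{\,-\beta}
  \;\longrightarrow\; \infty
  \quad\text{for } \beta>0 .
\]
```

The point of the last line is that the zero Fourier mode contributes the same amount to H^σ for every σ, which is why no Sobolev-scale energy estimate can survive.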
Maybe one word about the quintic and higher nonlinearities. One example is enough. Grébert and Thomann gave some examples where they wanted the first and third phases to be equal. With Thomann, we constructed other examples based on Pythagorean triplets, so I was happy to pretend I was doing some number theory for once. Essentially, the approach is the same as before: you start from five modes and they create the zero mode, with the same picture as before. So we don't have to describe the resonance set fully. And in the 1D cubic case, as I said before, you cannot create the zero mode at leading order, and "at leading order" is important. So we look at the next term in the WKB expansion, which will be of order epsilon, and essentially we construct it. And the outcome, now that everyone is tired, including the speaker: if you start from non-zero modes here, the zero mode will not appear at this order, but it will appear here, and we have an explicit formula, which is nice. Then you do the same numerology as before: if you want the initial data to be small in H^s, the requirement is this one, and if you want this remainder term to blow up in the evolution, you have to require this. And this is why you get the critical exponent minus 2/3 by this approach. Probably you cannot improve the statement by this approach, but it's certainly not sharp. OK, thank you. [Question] One question before: why are you using the term loss of regularity? Because it usually means that you start with fixed data in a space, and then in time you exit that space. But here, it's not exactly that. [Answer] It's not exactly that. It tells you that if you expect energy estimates, they will involve a loss of regularity. So it's not exactly constructive; you just rule out some energy estimates.
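The remark earlier, that a cubic resonance yields a quintic one by copying the last phase, can be checked in the same spirit. This sketch assumes the sign pattern k1 − k2 + k3 − k4 + k5 of the quintic interaction |u|⁴u; the duplicated pair k4 = k5 then cancels in both the momentum and the energy relation:

```python
# Extend the cubic key example to the quintic nonlinearity |u|^4 u by
# copying the last phase: with k4 = k5, the pair cancels in both the
# momentum relation and the eikonal (energy) relation, so the cubic
# resonance survives unchanged.

def quintic_resonance(k1, k2, k3, k4, k5):
    """Return (output mode, True iff the quintic resonance condition holds)."""
    k = tuple(a - b + c - d + e for a, b, c, d, e in zip(k1, k2, k3, k4, k5))
    sq = lambda v: sum(x * x for x in v)
    return k, sq(k1) - sq(k2) + sq(k3) - sq(k4) + sq(k5) == sq(k)

k1, k2, k3 = (1, 0), (1, 1), (0, 1)             # the cubic key example
k, res = quintic_resonance(k1, k2, k3, k3, k3)  # copy the last phase twice
print(k, res)   # -> (0, 0) True : the zero mode is again created resonantly
```

This is only the "copying" construction from the talk; the Pythagorean-triplet examples mentioned above are different and are not reproduced here.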
And in the L2-supercritical case, you know that you have a minimum amount of regularity available, ready to be lost. So with this lower bound on k given by this numerology, and in negative regularity, there's no way you can expect any Sobolev estimate. So that's the whole picture. [Question] Just to go a little further with this question: do you expect a situation where, in order to perform the construction, you actually have to see blow-up, where one of your parameters does not work out and the solution blows up? So with fixed data you are not able to perform your calculation. [Answer] If you are energy-supercritical, you can use scaling to blow up arbitrarily soon. But beside that, I don't have an answer. [Question] Do these types of analysis ever lead to examples of type II blow-up, where somehow some norms don't grow but you still have blow-up? [Answer] Not that I know of, but it's a very interesting question; I've never thought of it this way.