So, okay, what I hope I can do today is to finish this presentation of the I-method, with essentially everything that we need; after that it is just computations that I won't necessarily carry out for you. Then we will look at modified scattering on R, and I will probably only have time to comment on what changes when you go from R to R x T^2, which makes the equation more like a system in some sense. I have put on the board the material from last time: the Fraunhofer formula, and the fact that we had obtained a robust local theory for the cubic NLS on the real line in every H^s space with s > 1/2. The one thing I would like you to remember is that this theory was robust in the sense that it did not really depend on the nonlinearity: it depended on the dispersion relation, through Strichartz and bilinear Strichartz estimates, and on the fact that we had three copies, the cubic nonlinearity. The last thing we saw was the formal computation of what happens if you look at the energy not of your solution but of some modification of your solution; it was this formula, and we observed that in some cases you get a lot of cancellation there. So, to sum up, the situation for the cubic NLS on the real line is this: we have a robust perturbative theory on H^s with s > 1/2, and essentially it says that whenever you are in a space smoother than H^{1/2}, the high frequencies are so small that they do not really matter, and you believe that nothing changes too much when you look at very high frequencies, provided you are in such a space. The second thing we would like to use is a non-perturbative ingredient, the conservation of energy, but unfortunately it is only available in H^1.
And so we expect things to be fairly stable in H^1. It is non-perturbative in the sense that here it does matter what the exact structure of the nonlinearity is: if there were some complex conjugates here or there, this would not work as well. So we would like to use this, and the question is: knowing that whenever you are in H^s with s > 1/2 the high frequencies do not matter too much, can you hope to push the non-perturbative method a little further? At the end of the day this is what we are going to prove: indeed you can, at least for s > 5/6. So what you want to do is change your unknown to get something where you can use the stability. The question is then how to choose that change of unknown so that you start with something in H^1, where you can use these tools; and conversely, if you were able to control this new unknown well, what kind of control would that give you on your solution u? The trivial choice to pass from a solution in H^s to a solution in H^1 would be to smooth it by taking s - 1 derivatives. In this case, if you were able to control v, it would tell you that the Fourier transform of u has to stay essentially below an envelope like |xi|^{1-s}, and the Fourier support should sit somewhere under there. But what we have seen is that this is too simple a change of unknown: it changes the equation too much, and we would not really be able to make good use of the conservation law. Instead, the idea was to take some multiplier which is equal to one up to some frequency and only then starts to decay, just enough to compensate for the fact that you are only starting with a solution in H^s; the operator I will be the Fourier multiplier with this symbol.
So the symbol is a constant, equal to one, up to some frequency, and now you get a new parameter N which tells you up to which frequency you make no distinction between measuring your solution in H^1 and measuring it in H^s, and where you start controlling the tail. If you choose this as your multiplier, then essentially the information you hope to get is that your solution stays below this envelope, hopefully for a large time. The game is then that you have this parameter N, and as you take it larger, you expect the lower-frequency piece of your solution to be more and more stable: the larger you take N, the longer the time over which you hope to propagate this information, but unfortunately the poorer the control you get on your solution. This is something worth noticing, because it gives you a mechanism for global existence in which we take N larger and larger, obtaining poorer and poorer control on the solution but on longer and longer time scales. In this case we will not be able to bound any norm which is scale invariant, which is a little surprising, knowing that at the end of the day you do get uniform, global-in-time information. This will be very different when we move to modified scattering, where we really need to understand the asymptotic behavior of the solution, and there the crux will be to get a uniform bound on a scale-invariant norm. Okay, once these remarks are made, and once we have chosen the multiplier, which we saw last time, the proof itself is just a combination of two propositions. Let me write them and then comment on them. But before that, let us just observe that we are going to work with the new unknown v. So what is the equation satisfied by v?
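Since this multiplier is central to everything that follows, let me record its standard form (this is the usual I-method choice; the smooth interpolation region between N and 2N is my own filling-in of what is on the board):

```latex
m_N(\xi) =
\begin{cases}
1, & |\xi| \le N, \\[4pt]
\left( \dfrac{N}{|\xi|} \right)^{1-s}, & |\xi| \ge 2N,
\end{cases}
\qquad
\widehat{I_N u}(\xi) \;=\; m_N(\xi)\,\widehat{u}(\xi),
```

with m_N smooth and nonincreasing in |xi| in between. With this choice, I_N maps H^s into H^1 with the quantitative bound ||I_N u||_{H^1} <~ N^{1-s} ||u||_{H^s}, which is exactly the envelope statement above.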
It starts in the same way: the linear operator commutes with I, so there we change nothing. And so a priori this is a little surprising, or a little scary, because remember that I^{-1} is essentially a positive power of derivatives, of order 1 - s. So if you are in the counterexample for the scaling estimate, where all three inputs sit at the same very large frequency, you smooth once but then you differentiate three times, so you are really worse off. But of course what saves you is that you have gained that you are in H^1, which is much smoother than the H^{1/2} that you would need for the cubic nonlinearity. So this tells you that, as long as you are at regularity above H^{1/2}, you would expect this scenario not to be too bad. There is one other scenario, usually the worst case in, say, quasilinear problems: one big frequency with all the others small, so that you do not get much advantage from them. But then if you have only one big frequency, you differentiate it once and you integrate it once, so you are essentially in the same situation as if there were no I at all. Okay, and so once this is done, the proof of the theorem follows from two propositions. The first one tells you that this is a nice equation: if your solutions are in H^1, then you can propagate this; they will behave nicely, just like free solutions, at least on some interval of time.
For this it is important that you are above one half, but the moment you are above one half it works. Let us give ourselves a little room: start with initial data in h^1, and what we want is to say that it propagates; we are looking at a solution that stays more or less in this regime, so we want to make sure that the solution is not too big, and then see on how long a time interval we can control it. So if N is large enough, and T is of the appropriate size, then there exists a unique solution of this equation with the correct initial data, which remains in this space, so that I can think of it as essentially a linear solution: the norm in the correct space is of the same size as the size of the initial data. The only important information here is that, so long as we keep some control on the size of our solution, we get a uniform interval of time on which we can essentially think of our solution as a linear solution. And what do we gain from this? We gain access to all of the bilinear estimates, because, as we have seen, to be able to use those bilinear estimates we have to know that our solution behaves like a free solution; otherwise it is a complicated nonlinear constraint on the space. One last thing: this proof is really the same as the one we did for the equation without the I. You go through the same proof; you localize each factor in dyadic frequencies, and once you do, the multiplier on each dyadic piece is either one or some fixed number, and then you just sum.
So this is why I wanted to insist that what we had was a robust perturbative theory in H^s: it goes through for this kind of modification of the equation. The second proposition says that, so long as we have good control on our solution, we will have good control on the increment of the energy, and it is going to be relatively small. I will give you the statement and then we will see why it implies the theorem; after that, the proof of the statement really just goes through this formula, keeping in mind the cancellations that we had, and then estimating everything in a fairly naive way. So we want to be able to think of our solution as a linear solution, so we place ourselves in the setting of the previous proposition; in that case, so long as we look at a sufficiently small interval, we can estimate those terms and get a bound of this form. All right, so what does that say? So long as you can force this to be small, the difference of the energy between the final time and the initial time is a small number times the energy at the initial time, so the energy has not really changed. How do you prove that? You just use the formula there and control all of the terms. One last comment: the moment you have that, you have won, because what you are going to do is iterate this sufficiently many times. Why? Because this tells you that your energy did not really grow. So in particular it is going to remain smaller than, say, E(Iu(0)) — well, probably not exactly that, but smaller than the upper bound you had for the initial data, even after p iterations: each iteration changes it by, say, one part in a hundred, and you can continue until the accumulated change becomes comparable to the energy itself.
But so long as you can iterate, it gives you back the right assumption here, so that you can do it on the next interval of time. So now you just have to ask yourself: how many times can I iterate before I reach that? And it is essentially one over this number here. If you choose N large enough, you can arrange that this one is the square root of that, so this is really the bigger one. So how many times can you iterate? Essentially N^{s - 1/2} times. So you can handle p intervals of time, each of size T: p times T, which is N^{s - 1/2} times the size of one interval, N^{-2(1-s) - delta}. And this should give you exactly the 6s - 5, because you get 3s - 5/2 plus some remainder. And you see that if you choose s > 5/6 this exponent is positive, and so by taking N larger and larger you get control on your solution on longer and longer time intervals. After that, to capture the growth of the norm, you just undo the relation between T and N. Okay, so basically that is it. Those two propositions are really just turning the crank with what we had seen before. So I think that is it for the I-method, but again I want to stress the fact, which I find surprising, that it is a method that gives you global control on your solution even without any uniform control on a scale-invariant norm. But of course the price to pay is that you have no information at all about the asymptotic behavior of your solutions. So now let me switch to a completely different part, and let us see if I can link the two parts later. This is modified scattering for the cubic NLS, and I am going to show you a somewhat downgraded version of a proof of Kato and Pusateri.
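The arithmetic behind the exponent 5/6 can be written in two lines (this is just the bookkeeping of the exponents quoted in the lecture; delta is the small loss from the local theory):

```latex
\text{time per step} \;\sim\; N^{-2(1-s)-\delta},
\qquad
\text{number of steps} \;\sim\; N^{\,s-\frac12},
```

```latex
\text{total time covered} \;\sim\;
N^{\,s-\frac12}\cdot N^{-2(1-s)-\delta}
\;=\; N^{\,3s-\frac52-\delta}
\;=\; N^{\frac{6s-5}{2}-\delta},
```

and the exponent 3s - 5/2 is positive exactly when s > 5/6, so letting N tend to infinity covers arbitrarily long time intervals and gives global existence in that range.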
But as I have told you before, there are a lot of variants of that, so you can choose the one that you prefer. However, I would say that for the moment we understand pretty well what happens for small data in some topology, whichever one works; but trying to understand what happens when you take larger data is something extremely complicated, and I do not think we have a good understanding, at least if you want to go to very long times. So if you want to go in this direction, one of the things you want to do is improve on the topology, the control that you need. For a scale-invariant space, the best result that I know is, I think, a master's thesis of Charming Guo. It is also related, although it is not exactly the same equation, to work by Valeria Banica and Luis Vega, which essentially tells you that it is going to be very difficult to get global existence for large data; but at the same time, if we do not understand that, we will remain very much limited. Okay, so now let us get there. Maybe before we get into the actual computation, let us do a heuristic proof, which will take us four lines; after that, the bulk of the work is to try to make sense of it. The simple proof we can get is by assuming that the Fraunhofer formula is exact: imagine that the free evolution of your function is given exactly by the stationary phase term. So, heuristically: if you remember, we had seen, and it is direct to see, that the cubic NLS can be rewritten in this form. And now we are going to change variables and go to what is more stable, which I will call v(t) = e^{it dxx} u(t), and we will work with this most of the time. The advantage is that at the linear level v is completely conserved: the time variation of v is purely nonlinear.
So in particular, you see that you have already won something: if v is of size epsilon, then dt v is going to be of size epsilon squared times the size of v. And what is the equation? It is

i dt v = e^{it dxx} ( |e^{-it dxx} v|^2 e^{-it dxx} v ).

All right, and now let us use the fact that we assume an explicit formula for the free evolution:

e^{-it dxx} v (x) = e^{i x^2 / 4t} / (4 pi i t)^{1/2} * v^(-x/2t),

and we take three copies of that, one of them conjugated. In front we still have the e^{it dxx}, which we will have to hope cancels somehow. But now you see what happens, and here the exact form of the nonlinearity is very important: if you remember, we discussed that maybe the main term in this formula is really the oscillating phase e^{i x^2/4t}, and now it comes once directly and once with a conjugate, so those two are going to cancel. And in fact this is really the main cancellation, because when I put the conjugate pair together, the only thing that happens is that I can pull out a factor 1/(4 pi t), and I am left with

(1/(4 pi t)) e^{it dxx} [ e^{i x^2/4t} / (4 pi i t)^{1/2} * |v^(-x/2t)|^2 v^(-x/2t) ].

And now what do I remark? Well, this is really just the Fraunhofer formula again, but not for v: for something a little different from v. So I have (1/(4 pi t)) e^{it dxx} e^{-it dxx} V, where the capital V is defined by V^(xi) = |v^(xi)|^2 v^(xi). And so now I can cancel the two propagators, and that gives me a simple ODE for v.
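Since the whole heuristic rests on the Fraunhofer approximation, here is a quick numerical sanity check of it (my own sketch, not from the lecture), for Gaussian data where the free evolution is explicit. The Fourier convention f_hat(xi) = ∫ f(x) e^{i x xi} dx is the one that reproduces the phase e^{ix^2/4t} and the argument -x/2t used above:

```python
import numpy as np

# Fraunhofer / stationary-phase approximation for the free evolution of
#   i u_t + u_xx = 0:
#   u(t,x)  ≈  (4 pi i t)^{-1/2} e^{i x^2 / 4t} f_hat(-x / 2t),
# with Fourier convention  f_hat(xi) = ∫ f(x) e^{i x xi} dx.
# For Gaussian data f(x) = exp(-x^2) everything is in closed form:
#   f_hat(xi) = sqrt(pi) exp(-xi^2 / 4),
#   u(t,x)    = (1/2) (1/4 + i t)^{-1/2} exp(-x^2 / (1 + 4 i t)).

t = 50.0
x = np.linspace(-100.0, 100.0, 2001)

u_exact = 0.5 * (0.25 + 1j * t) ** -0.5 * np.exp(-x**2 / (1.0 + 4j * t))

f_hat = np.sqrt(np.pi) * np.exp(-((-x / (2.0 * t)) ** 2) / 4.0)
u_fraunhofer = (4j * np.pi * t) ** -0.5 * np.exp(1j * x**2 / (4.0 * t)) * f_hat

rel_err = np.max(np.abs(u_exact - u_fraunhofer)) / np.max(np.abs(u_exact))
print(rel_err)  # small already at t = 50, and it shrinks further as t grows
```

The relative error here is of size roughly 1/t, consistent with the stationary phase expansion; the point is only that the leading term really does capture both the t^{-1/2} amplitude and the quadratic phase.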
So i dt v = (1/(4 pi t)) V, or maybe in Fourier space, where it is a little easier,

i dt v^(xi) = (1/(4 pi t)) |v^(xi)|^2 v^(xi).

And now you see several things. First, if you were to go to higher dimensions, the exponent in the stationary phase factor would be bigger than one half, so at the end of the day you would have a factor here decaying faster than 1/t, and essentially no matter what the right-hand side is, it is an ODE that you could integrate brutally in time. But if you are in dimension one, you get exactly 1/t, and this is not something you can brutally integrate: even if you change the time variable to absorb the 1/t, you see that you really have to understand this ODE. Now this ODE appeared already in the talk by Professor Cal, and he pointed out that a priori it is a little scary because it is a nonlinear ODE; but in fact, no, it is a linear ODE, the moment you realize that |v^(xi)| is in fact independent of time. If it is independent of time, then you can replace it by its constant value and integrate explicitly: v^ should equal e^{-i |v^_0|^2 log t / 4 pi} v^_0, and from there you can pass back to u if you really want, picking up the corresponding e^{i c |u_0^|^2 log t} phase. And so this is why it is called modified scattering: at the level of u, what happens is that you really have a strong dynamic which is linear, but you have to correct it with a phase living on a much longer, logarithmic time scale; if you do not, then after a very large time your solution is completely off. And essentially that is the proof. Now after that, the only thing is to really make sure that you can control the remainder between the Fraunhofer formula and the exact evolution, and this is made easier by the fact that in one dimension it is really explicit; so then you have to deal with that.
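Here is a numerical confirmation (again a sketch of mine) of the observation that makes the limit ODE linear: |v^| is constant in time, so the explicit log-phase formula really solves the ODE. The ODE is diagonal in xi, so one Fourier mode suffices:

```python
import numpy as np

# Limit ODE from the heuristic:
#   i dw/dt = (1 / (4 pi t)) |w|^2 w,   t >= 1.
# Multiplying by conj(w) and taking imaginary parts shows |w| is conserved,
# hence the explicit solution  w(t) = exp(-i |w(1)|^2 log(t) / (4 pi)) w(1).
# We integrate numerically with RK4 and compare against that formula.

w0 = 0.8 + 0.3j
T, n = 20.0, 20000
dt = (T - 1.0) / n

def rhs(t, w):
    return -1j * abs(w)**2 * w / (4.0 * np.pi * t)

w, t = w0, 1.0
for _ in range(n):            # classical fourth-order Runge-Kutta step
    k1 = rhs(t, w)
    k2 = rhs(t + dt/2, w + dt/2 * k1)
    k3 = rhs(t + dt/2, w + dt/2 * k2)
    k4 = rhs(t + dt, w + dt * k3)
    w += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt

w_explicit = np.exp(-1j * abs(w0)**2 * np.log(T) / (4.0 * np.pi)) * w0
print(abs(w - w_explicit))    # agrees to integration accuracy
```

Note that the modulus of the numerical solution stays pinned at |w0|, which is exactly the conservation law that later reappears as the weak norm being conserved by the limit equation.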
Now maybe let me just make one comment before we get to the proof: for R x T^2, we will see that you can at least get a similar heuristic, but of course the main problem is that you end up with a system, this time a truly nonlinear system: the resonant system that Professor Procesi talked about. So something that you cannot integrate explicitly, and the main problem for us will be that it has few conservation laws, and not very strong ones. So, any question about this? If not, let us try to make sense of this and make it rigorous. Making it rigorous essentially means finding the right norms in which we can close a fixed point or a bootstrap argument. Now the question is: what should the right norms be? I will give you the full control that we can get and then we can discuss a little; or maybe I will give you the main statement. If you start with initial data which is small in this sense, then there exists a unique global solution u. We can add the H^1 norm for free; this is just conservation of the energy. It will not help us much, but it can help you get started. Now, the next norm will also end up being not so important, but it is well known to be important as an intermediate quantity to control. And then we have the two important pieces; essentially these are the kinds of control that we need. The intermediate one, as I said, is a usual thing to control, and actually it is easy to see why it should matter: what we want to control is |u|^2 u, so if we get good control on this, we get good control on |u|^2, which is the potential that sits in front of u.
That we want to control this is something you could have guessed from the beginning, because we want to make sense of the Fraunhofer formula, or more generally we want to say that we can extract the main term in a stationary phase analysis. This is possible if you have some smoothness of the amplitude, and this norm is really telling you that there is some smoothness of the amplitude, but in a way that gets worse over time. And this is maybe the surprising, and not surprising, bit: if you really want to control your solution for a large time, then you do need control on a norm that is scale invariant, and you do not have that many choices of scale-invariant norms, because the scaling, as we have seen, was d/2 - 1 for H^s, so it would be H^{-1/2} here, which is a pretty bad space. You could have hoped to get by with |x|^{1/2} v in L^2, but in fact this is defeated just by the form of the solution you expect: any time you take a derivative in xi, a log t pops up. Now, this is clear for one full derivative; maybe you could still hope that half a derivative would work, but at the very least it has to be complicated. And here is one norm which is at the right scaling, and at the very least you see that it is a conserved quantity for your limit equation; in fact, its conservation was the crux of the argument for this quantity being well controlled over time. Okay, so now let us try to do it. The question is how to get a uniform control on this. It will have to come from some form of Gronwall inequality, but we have to put in one non-perturbative ingredient somewhere.
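To fix ideas, the bookkeeping of norms I have in mind is of the following type (hedged: the precise spaces vary between the different papers; this is the Kato–Pusateri-flavored choice, with v the profile of u):

```latex
\underbrace{\;\|\widehat{v}(t)\|_{L^\infty_\xi}\;}_{\text{weak norm: scale-invariant, conserved by the limit ODE}}
\qquad\text{and}\qquad
\underbrace{\;\|v(t)\|_{H^1} + \|x\,v(t)\|_{L^2}\;}_{\text{strong (S) norm: allowed to grow like } t^{C\varepsilon^2}}.
```

The weighted piece ||x v||_{L^2} is the quantitative form of "smoothness of the amplitude": it is what lets you run the stationary phase analysis, and it is the piece whose control degrades slowly in time.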
And so let us first observe that this one is for free; let us just forget about it. And let us see what we would really want, say v^ in H^1, to make sense of the norm that we would like to control. It is going to be this strong norm: what we would really want is v in H^1 and v^ in H^1. And if we had uniform control on this, we would in particular control the L^infinity norm of v^, since control in H^1 gives control in L^infinity. Now, this is an L^2-based norm, so the reasonable way to try to control it is via an energy inequality. And via an energy inequality essentially means: you take your operator, you pair it with your equation, it goes on the nonlinearity, it lands on one of the copies of u, and the other two you put in L^infinity. What you could hope to get, and in fact it is fairly easy to get, is that

d/dt ||v||_S <~ ||u||_{L^infinity}^2 ||v||_S,

if you are a little careful. The only place where you have to be careful is whether you end up with v or v^ in L^infinity, which would be bad because that is not going to give you anything, or with u in L^infinity, which you expect to decay. So this you can get, and the first reasonable thing to do would be to replace ||u||_{L^infinity} using the expected decay; if you do that, you get

d/dt ||v||_S <~ (1/(1+t)) ||v||_S^3.

This is what you would get if you only tried to control this kind of norm.
But now you see that the problem is that this kind of Gronwall inequality would only give you long-time control, not global control, because it is an ODE that blows up: again, after a change of variable to take out the 1/t, you have dt f <~ f^3, and that is not going to let you get to global control. So you have to improve a bit, and the question is where. There are several things you could hope for, and to some extent this is what you have to use on R x T^2: you could hope to do a normal form transformation to move your equation from a cubic equation to a quintic one. This you can do on R x T^2, and it gets rid of some of the terms, but in the 1D case it would not really help you too much, essentially because the nonlinearity is so simple and so resonant that it is fully there. So if you cannot do a normal form transformation, what you can hope is to get better control on ||u||_{L^infinity} than the one coming from the S norm alone. And this is what happens, but now the question is how. Well, you have to look at the main term in the Fraunhofer formula, and if you look at it, it tells you that controlling the decay of u uniformly is really almost the same as controlling the amplitude |v^| uniformly. And so this can give you the idea of adding a control on the absolute value of the amplitude v^. So, to get a uniform bound, we need to replace the formal Gronwall scheme: the first step we cannot really improve, so it is still going to be there; the advantage is that it is so robust that it probably does not depend too much on what we put in S, so long as it is something controlled by L^2-based norms.
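If the amplitude |v^| does stay of size epsilon uniformly, then via the decay this buys for u in L^infinity, the robust energy estimate closes with only slow growth; schematically (my own sketch of the bookkeeping):

```latex
\frac{d}{dt}\,\|v(t)\|_{S}
\;\lesssim\; \|u(t)\|_{L^\infty}^{2}\,\|v(t)\|_{S}
\;\lesssim\; \frac{\varepsilon^{2}}{1+t}\,\|v(t)\|_{S}
\;\;\Longrightarrow\;\;
\|v(t)\|_{S} \;\lesssim\; \|v(1)\|_{S}\,(1+t)^{C\varepsilon^{2}}.
```

The cubic Gronwall input has become linear: the uniform weak-norm bound replaces two of the three factors of the S norm, and linear Gronwall with an integrand of size epsilon^2 / t gives only the slow power t^{C epsilon^2}.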
And then we will try to do better than what we had before, and what we can do is introduce a weak norm, which is going to be this one, and which at this stage is really non-negotiable: if we cannot get this one uniformly bounded, then we probably have to give up on this scheme altogether. But if we can hope to get it uniformly bounded, then we can hope for something like this. And there is one more thing that we can always put in for free; let me do it this way, so that I do not have to care about times between zero and one. Now, this by itself is probably not enough, so I have a new quantity to control, and I need to hope for something better; what we will show, and this is where the non-perturbative ingredient comes in, is that when we control this quantity, we can cancel the main term there. And so now the whole point is that these two Gronwall inequalities together allow you to get uniform control on the weak norm and control on the S norm that grows very slowly in time. So, consistently: if we can prove these inequalities, then we can propagate them, in fact probably with epsilon squared in the exponent, and the point is that even though this norm is going to grow, it always comes multiplied by something with a fixed decay, so that the products are still integrable in time, here and there, okay? So now the point is just to try to get those two inequalities. Maybe let me say something now about what happens on R x T^2. On R x T^2, remember, we want to control an H^s norm with s > 1/2, and now the problem is that the relevant L^infinity norm is not even bounded at the initial time.
So, well, that means that if you want to do this, you need to do better here and put some other norm that does remain bounded, and the norm that remains bounded is whatever you can control for the resonant system, for the asymptotic behavior. And so there is some extra work that goes in there. So now we can start the real proof; and how much time do I have? Okay, so at least we can begin. There are two things that are important to notice. One is that in this scheme the L^infinity norm was essential, so let us make sure that we get the right control on it here. And the second is: what is the cancellation that happens that allows us to have this better estimate? Again, the H^1 norm is conserved, say through the energy, so let us just not talk about it. So, we want to control this quantity by the weak norm plus the weighted piece, so that I get the right decay that I wanted before. The first remark is that it suffices to look at times t >= 1, because otherwise I just use the conservation of the energy. And now, for t >= 1, I will just use the Fraunhofer formula plus the remainder. The first term is not really so important, because it is directly okay. The only question is about the second term. So again, there is the phase, on which I am just going to take absolute values everywhere, and there is another phase here. And now it is clear what I want to do: I want to say that either y is small, and then I gain from the phase, or y is big, and then I gain from the fact that my norm control forces v to be small for large y. So the only thing I am going to do is decompose. And where is the smallness that I want? When y^2 is comparable to t. So I will decompose there, and when y is big, I just use the fact that all of this factor is bounded by, say, two.
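For orientation, the estimate this splitting argument is driving at is the standard Fraunhofer-remainder bound, which I would state as follows (the exponent 3/4 is what Cauchy–Schwarz over the region |y| <~ sqrt(t) produces):

```latex
\big\| e^{\pm it\partial_{xx}} v \big\|_{L^\infty_x}
\;\lesssim\;
t^{-\frac12}\,\big\| \widehat{v}\, \big\|_{L^\infty_\xi}
\;+\;
t^{-\frac34}\,\big\| x\,v \big\|_{L^2_x},
\qquad t \ge 1.
```

The main term carries exactly the weak norm at the sharp t^{-1/2} rate, and the remainder is better by a quarter power at the price of the weighted norm, which is allowed to grow slowly.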
And when y is small, this phase factor is bounded by y^2 / t, times v, and there was the 1 / root t in front. And now you can just integrate: in this region I can afford to put a weight y, dividing by root t, because I need something that is integrable; then I do Cauchy–Schwarz on both sides and I get this estimate. So this really shows you that the crux of getting the proper decay is controlling the weak norm, and after that we do not necessarily care too much about the stronger norm. Now, controlling the weak norm is more complicated, but not too complicated; but here there is a miracle that happens, and if it did not happen, I do not know what you would do. It is the fact that you have a good vector field which, at least the way I understand it, allows you to control x v in L^2, which is something much stronger than an L^1 bound on your data, which is morally where the perturbative theory should stop. So here you need something else, and it is some structure in the nonlinearity that allows you to get control on this. What I said is really about the fact that you can put x v in the strong norm; it is probably also related to the fact that you get a better inequality for v^ in the weak norm, but not in a way that I understand well. So I am going to show you a very pedestrian way to control the weak norm. You can, after that, try to optimize it and make it nicer and nicer, but I like it because it is really the point of view of trying to find a small dent in the armor and then turning the crank to get better and better estimates until you get what you want. So, we had our equation, which I have erased, but if you remember, it was this equation; and if you look at it in Fourier space, it gives you precisely that.
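The Fourier-space form of the equation referred to here can be written, up to absolute constants and sign conventions (my reconstruction; the key feature is the purely quadratic phase 2 eta sigma, whose only stationary point eta = sigma = 0 produces the |v^|^2 v^ of the limit ODE):

```latex
i\,\partial_t \widehat{v}(t,\xi)
\;=\; \frac{1}{4\pi^{2}}
\iint e^{2it\eta\sigma}\,
\widehat{v}(t,\xi+\eta)\,
\overline{\widehat{v}(t,\xi+\eta+\sigma)}\,
\widehat{v}(t,\xi+\sigma)\; d\eta\, d\sigma.
```

Formally, two-dimensional stationary phase in (eta, sigma) yields a factor of order 1/t times the integrand at eta = sigma = 0, recovering i dt v^ = (1/(4 pi t)) |v^|^2 v^.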
All right, and somehow out of this we want to extract the ODE that we think is driving v hat. So what can we get? Formally, if we could do the stationary phase analysis, we would be done, because for this phase the critical points correspond to eta equals sigma equals zero, and they are exactly the ones that are going to give us the ODE. The problem, of course, is that we don't have that much regularity on our function, and so we still need to be able to extract the phase. And so, whenever it's a problem about regularity, we want to introduce Littlewood-Paley projectors, but this time for v hat, so they will correspond more to localization in space. So decompose v into the part close to the origin and the part far from the origin, and, where I will ask you to bear with me, close and far are going to mean different things at different times, but they will always be expressed in terms of one parameter called capital R. So let me write it this way: it is just something that is one near the origin, and chi is like this. Whereas whenever we were talking about frequency we were choosing our Littlewood-Paley pieces to live between one and two, now we want the one that starts at zero.
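The erased equation can be reconstructed. A sketch, with the sign of the phase and the constant c depending on conventions (both are assumptions here), for the profile v of a solution of the cubic NLS:

```latex
i\,\partial_t \hat v(\xi,t)
  = c \iint e^{2it\eta\sigma}\,
      \hat v(\xi-\eta,t)\,\overline{\hat v(\xi-\eta-\sigma,t)}\,
      \hat v(\xi-\sigma,t)\; d\eta\, d\sigma .
% The phase comes from the quadratic dispersion relation:
%   (\xi-\eta)^2 - (\xi-\eta-\sigma)^2 + (\xi-\sigma)^2 - \xi^2
%     = -2\eta\sigma ,
% so its only critical point in (\eta,\sigma) is \eta=\sigma=0, where the
% integrand collapses to |\hat v(\xi,t)|^2 \hat v(\xi,t).  Stationary phase
% at scale |\eta|,|\sigma| \sim t^{-1/2} then formally yields the ODE
%   i\,\partial_t \hat v(\xi,t) \approx \frac{c'}{t}\,
%     |\hat v(\xi,t)|^2\,\hat v(\xi,t).
```

The whole argument that follows is about justifying this formal stationary phase computation with only the limited regularity that the weighted norm provides.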
Okay, and if you just think a bit about the formula for the stationary phase, what we want to do is to localize at something that would be one over the square root of the big parameter lambda, and this we can do so long as our inputs are regular. If you do the computation, that would correspond to R about root t. So here, for example, is where you could tweak it a bit. Let's be ambitious, a little bit too much to begin with, and choose it like this, where there is another small parameter epsilon that doesn't depend on anything and that I'll choose to be very small; for the moment let's hope that we can really get almost to the scale of the stationary phase analysis. And look at this, which, as we have said, is this thing, F of (v, v, v). Now in the input we decompose all of those v's into the one close to the origin and the one far from the origin, so all in all we have two times two times two choices, and the hope is that most of the cases are going to lead to a small contribution. Let's see why. So now let me assume that I have three independent inputs, because each could be either the one close to the origin or the one far from the origin. If I wanted to bound this thing brutally, I could do Cauchy-Schwarz: take two of the inputs and estimate them in L2, and then the last one, estimate it in L1; I don't have that much choice. By that I just mean that I can choose which ones I estimate in L2 and which in L1. And now let us see: if I have two inputs that are far away from the origin, then I'm done, because those are the ones that I'm going to estimate in L2, right? And if R is bigger than the correct scale, and if I have two of them, then I get a term like that, okay?
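The case of two far inputs can be sketched in one line; here the choice R = t^{1/2+epsilon} is an assumption about the "ambitious" scale the lecture refers to:

```latex
\big\| \big(1-\chi(x/R)\big)\, v \big\|_{L^2}
  \;\le\; R^{-1}\,\|x\,v\|_{L^2},
\qquad R = t^{1/2+\varepsilon}.
% Estimating the two far inputs in L^2 and the remaining one in L^1
% (as in the Cauchy-Schwarz / Young scheme above), the contribution is
%   \lesssim R^{-2}\,\|xv\|_{L^2}^2 \cdot (\text{bounded factor})
%   = t^{-1-2\varepsilon}\,\|xv\|_{L^2}^2 \cdot (\text{bounded factor}),
% which is integrable in time, so all cases with two far inputs are harmless.
```

This is why, out of the eight cases of the decomposition, one may assume from now on that at least two of the three inputs are close to the origin.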
So now, out of my three inputs, I can assume that at least two of them are almost at the right scale, and once you are there, you know that it's going to work; it's just a matter of being patient enough. If we only had one at the right scale, you would have to be more careful. Now I can assume, let's say, that the first two inputs are close to the origin, and what that means is that those two inputs are essentially smooth. So now I can start to use my stationary phase kind of formula and restrict to the case where the phase is going to be stationary. What does that mean? If I assume that the first two are smooth, then I can integrate by parts in eta, and the gradient of the phase in eta is t sigma. So decompose I into the stationary and the non-stationary part, where in the stationary part I restrict, with a cutoff chi now, to a neighborhood of the stationary point, so where sigma is at most t to the minus one half plus two epsilon, and then I have the same integrand: A hat of xi minus eta, B hat of xi minus eta minus sigma, C hat of xi minus sigma, d eta d sigma. And the non-stationary part is wherever I am far away from the stationary point. Now what can I observe? Well, this non-stationary integral is arbitrarily small. Why? Because each time I integrate by parts in eta, I'm going to gain; the worst case is that I gain one over sigma t, and the worst case is that the derivative falls on one of those inputs. And now I know that their frequencies are at most t to the one half. I can do that N times, and there is nothing that prevents me from iterating it. So as long as I have any little gain, I manage to make this arbitrarily small. So I can gain, say, t to the minus ten, and put all of the inputs in the strong norm.
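The "iterate any small gain" step can be made schematic. A sketch, where the scales (smooth inputs costing t^{1/2+epsilon} per derivative, non-stationary region |sigma| at least t^{-1/2+2epsilon}) are assumptions consistent with the choice of R above:

```latex
\iint e^{2it\eta\sigma}\, F(\eta,\sigma)\, d\eta\, d\sigma
  \;=\; -\iint \frac{e^{2it\eta\sigma}}{2it\sigma}\,
      \partial_\eta F(\eta,\sigma)\, d\eta\, d\sigma .
% Each integration by parts in \eta gains (t|\sigma|)^{-1}
% \le t^{-1/2-2\varepsilon} on the non-stationary region and costs one
% \eta-derivative of the smooth inputs, of size at most t^{1/2+\varepsilon}.
% The net factor per step is therefore
%   t^{1/2+\varepsilon} \cdot t^{-1/2-2\varepsilon} = t^{-\varepsilon},
% and after N iterations the non-stationary part carries t^{-N\varepsilon}:
% any fixed small gain, iterated, makes it as small as one wants.
```

This is the pattern repeated throughout: a tiny gain per integration by parts, turned into an arbitrarily large gain by iteration.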
Okay, so the point is that we can forget about this, and now we can do all of the work that we had done before on this new I instead of the previous one. And now we just do another iteration. I could do it here, but you would see it's exactly the same steps, except that we use more and more the smallness of the support that we get in the Fourier variable to force our inputs to be smoother and smoother. Now iterate, because I started with A, B, C. So what have I gained? I've forced sigma to be pretty small. And I know that those two inputs are already about as smooth as I want, and I would like this one to be smoother. So this time I will estimate this one in L2 and this one in L2, and then I have chi of t to the one half plus epsilon sigma in L2. And the point is that this doesn't give me anything, while this gives me a t to the minus one quarter. And for this one, I know that if it is localized beyond t to the three quarters, then we are done as before. So before, I only knew that those two inputs were as smooth as I wanted, and now I can force the third one to be a little smoother; not as much smoother as I would like, but a little smoother. And then we gain, so now I can assume that. And why is it good that the last one is a little smoother? Because the last one was the one preventing me from using the other gradient, which was the same thing as integrating by parts in sigma. When I integrate by parts in sigma, I gain a t and I lose a derivative. But the moment my derivative forces me to pay less than t, then I'm in business: I can restrict a bit more. And so now the stationary part is going to be I stat one plus I stat two. I always have the same phase, and I have this cutoff, which is in sigma. And if you do the counting: when I integrate by parts, I gain a t times sigma, but if the derivative falls on the rougher input, I lose up to t to the three quarters.
So I have to force, I can only make sure that sigma is smaller than t to the minus one quarter, or that the gradient is of this size, times A hat of xi minus eta, B hat of xi minus eta minus sigma, C hat of xi minus sigma, d eta d sigma. And then I stat two is the same thing, but with, in my notation, one minus chi instead of chi. And again, for the part which is non-stationary in the other variable, I can integrate by parts as many times as I want; each time I gain a t to a small power, but since I can iterate as many times as I want, I can just forget about that one. And maybe I'll just do it already. So now we've started to restrict everything, and we would like to continue gaining a bit. So we're going to estimate all of these in L infinity, so as to be able to use the full size of the support. And you see that if we estimate those two inputs in L infinity, we're going to gain t to the minus one half just by integrating in sigma, and we're going to gain a bit by integrating in eta. And so the moment this gains me more than t to the one half, I'm in business. So now, just by looking at this formula, here I can force one half plus. And now I can improve on the integration by parts to get all the way to this size. There is an exercise that I will let you do yourself: you can't really improve on that, or the only place where you could improve is by removing the one half. And why is that? Because when you continue to integrate by parts, this time the worst case is going to be when the derivative hits the cutoff, the localization factor, and there you lose more than you gain. But at least we can get to one half plus. And at this stage you've won, because then you can force it. Because again, by doing the same computation, putting one of them in L1 and the other ones in L2, you know that now all of them have to be smoother than what the stationary phase would require.
You can force the derivative of A to be at most t to the one quarter times A, the derivative of B at most t to the one quarter times B, and the derivative of C at most t to the one quarter times C. In fact, you have to lose a bit. But the point is that t to the one half was the cutoff for the stationary phase; the moment you can go below that, we'll see that it's enough to understand this thing completely. And so now let us extract the main dynamics, now that the input is smooth. Once again, where you could have made this argument a lot more elegant is in not having to turn the crank many times by repeating the same operation, but finding the better scales directly. But I thought it was better to do it this way, because you see that the moment you start gaining a bit, after that you know that it's going to work, and you gain little by little. Okay, so we are reduced to this case: dt of v hat of xi, t is equal to the integral of e to the two i t eta sigma times the inputs, and we've gained something on the support, but this is not going to be too important now. But where we've gained a lot is the fact that we can assume our inputs are smooth, in the sense that if I take a derivative of them, I get something that is t to the one quarter times itself. And so now what I would like to do, to see something better, is to rescale my variables to make this phase of size about one. So eta and sigma, I won't change their names, but eta goes to one over root t times eta, and sigma to one over root t times sigma. And then it's easy to see why it is interesting: because this is the scale where the phase starts to matter, but now, replacing eta by eta over root t, you see that here, here and there, everything but the xi is going to become very small, and so we'll be able to extract. So this is going to depend essentially on xi, especially since we have this smoothness now. So I have the Jacobian one over t, e to the two i eta sigma, and now the t has been absorbed.
Chi of t to the minus one half eta, chi of t to the minus one half sigma, times v hat of xi minus eta over root t, v hat of xi minus eta plus sigma over root t, v hat of xi minus sigma over root t, d eta d sigma. And once you see this, then it's going to be good, because we already have the main decay: the main term is never going to decay better than that. We've already identified it, and the moment we gain anything, we get something which is better than one over t decay, and so that will be integrable. So in particular, we can see that v hat of xi minus eta over root t, minus v hat of xi, is going to be smaller than the difference eta over root t times the derivative of v hat, but the derivative is at most t to the one quarter, times one over root t, so it's still going to be like t to the minus one quarter, times, let's say, the strongest norm of v. So in all of this I can pull out the argument, and now I get one over t, v hat of xi, t, squared, times v hat of xi, t, and then times the integral of e to the two i eta sigma, chi of t to the minus one half eta, chi of t to the minus one half sigma, d eta d sigma, plus a remainder.
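The main-term extraction just performed can be summarized in one display. A sketch, where the conjugate on the middle factor, the constant c, and the exact cutoffs are assumptions consistent with the earlier form of the equation:

```latex
\partial_t \hat v(\xi,t)
  = \frac{c}{t} \iint e^{2i\eta\sigma}\,
      \hat v\Big(\xi-\tfrac{\eta}{\sqrt t}\Big)\,
      \overline{\hat v\Big(\xi-\tfrac{\eta+\sigma}{\sqrt t}\Big)}\,
      \hat v\Big(\xi-\tfrac{\sigma}{\sqrt t}\Big)\,
      \chi\, d\eta\, d\sigma .
% Since the smoothed inputs satisfy |\partial_\xi \hat v| \lesssim t^{1/4}
% (times the strong norm), each argument can be replaced by \xi at the cost
%   \big| \hat v(\xi - \eta/\sqrt t) - \hat v(\xi) \big|
%     \lesssim \frac{|\eta|}{\sqrt t}\, t^{1/4} = |\eta|\, t^{-1/4},
% so, up to an error of size t^{-1} \cdot t^{-1/4} (times norms of v),
%   \partial_t \hat v(\xi,t)
%     = \frac{c}{t}\, |\hat v(\xi,t)|^2\, \hat v(\xi,t)
%       \iint e^{2i\eta\sigma} \chi\, d\eta\, d\sigma
%       + O\big(t^{-5/4}\big).
```

The error is better than 1/t, hence integrable, which is exactly the "moment we gain anything" criterion stated above.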
And now you could still be a little worried that here we're integrating over something bigger and bigger, but in fact you see that in the limit I can replace this by one, and then I just have the integral of the Dirac mass, and so this, if you compute it, is equal to pi plus something that decays a little bit. And so in the limit I get exactly the dynamics. And the most important point, and I'll finish here, is that d by dt of v hat of xi, t, squared, is two times the real part of v hat of xi, t, bar, times dt of v hat of xi, t, and here you see that it's important that I have, well, either a conservation law, or at least a cancellation for the limit system: for the limit system this product is purely imaginary, so this term disappears and I only get the remainder. After that, it's essentially straightforward to finish the proof. And if you go on R cross T2, well, the problem is that, essentially, this computation informs the norm that you're going to be able to keep under control, the weak norm, and the problem you get is that it controls very little, so you have to refine the previous argument a lot. One last comment I'll make is that when you get to controlling Hs with s smaller than one, then you don't have any conservation law for the Hs norm of your resonant system, and this is why I didn't give you something about the control of all the solutions to the NLS: the only thing you can do is construct one solution whose Hs norm grows, but such that there is one Sobolev norm a little lower that remains uniformly bounded, so that even if we don't have this argument for free, we have it for the solution whose asymptotic behavior we're trying to capture. This requires a little bit of work, but you can do it. All right, so in the end I didn't say too much about the R cross T2 case, but at least I hope this gives you a nice overview of questions related to the cubic NLS. Thank you very much for listening patiently.
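The limiting dynamics can be solved explicitly, which is where the logarithmic phase correction of modified scattering comes from. A sketch, with the constant pi as computed in the lecture and the sign an assumption, delta > 0 denoting the integrable gain:

```latex
i\,\partial_t \hat v(\xi,t)
  = \frac{\pi}{t}\, |\hat v(\xi,t)|^2\, \hat v(\xi,t) + O\big(t^{-1-\delta}\big).
% The cancellation: \partial_t |\hat v(\xi,t)|^2
%   = 2\,\mathrm{Re}\big( \overline{\hat v}\,\partial_t \hat v \big)
%   = 2\,\mathrm{Re}\big( -\tfrac{i\pi}{t}\, |\hat v|^4 \big)
%     + O\big(t^{-1-\delta}\big)
%   = O\big(t^{-1-\delta}\big),
% so |\hat v(\xi,t)| converges to some |W(\xi)| as t \to \infty.
% Integrating the remaining pure phase rotation then gives
%   \hat v(\xi,t)
%     = \exp\!\big( -i\pi\, |W(\xi)|^2 \log t \big)\, W(\xi) + o(1),
% the modified scattering behavior: the profile converges only after
% removing a logarithmically growing phase.
```

This makes explicit why the structure of the nonlinearity matters: with bars placed differently, the product above would not be purely imaginary, the modulus would not be conserved for the limit system, and the argument would break.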
So in your scattering results, is the convergence only in L2, but not in, like, H1 or your weighted spaces? Well, no, not in the strongest norm. Or only in L2? Well, in anything that interpolates between the two, because the difference decays and the top norm grows arbitrarily slowly, so by interpolation things in the middle are going to converge. But it is true that it's not completely satisfactory, in the sense that, for the assumption that you put on the initial data, you don't have scattering in that norm. But in some sense you know that you won't get uniform control of your solution in the strongest norm, because it's just not true for the resonant system. Thank you. Okay, so let's thank the speaker again.