So I looked on Wikipedia. Fraunhofer was a German physicist, but he died in the 1820s, and he was a glass maker, so he was making lenses, and he worked in spectroscopy, basically. And so the formula, I've heard of it as the Fraunhofer formula, but based on Google it's not so clear; I mean, it is called this way, at least by some people, I don't know exactly why. And since he died in the 1820s, I don't think he derived it for the Schrödinger equation. Now it has to do with the wavelength in diffraction, so it's not completely off topic either. One other remark: I mentioned the results on bounds on H^s norms for the cubic NLS on T and R, and my references were maybe not quite right; at least I forgot the work by Kappeler and Grébert, who also worked on that. And finally, this is a remark that Professor Tzvetkov made to me yesterday: you can, in fact, think of, let's say, the cubic NLS on this space, and at least I would understand it in the following way. So these are essentially functions which are compactly supported in the first direction, these are essentially functions which are compactly supported in the second direction, and these are essentially functions which are compactly supported. So if you go far away to infinity in the y direction, then this one stands alone, and the same thing holds for that one. So what you could do is to evolve this one and that one independently through your equation, and then you plug into the equation that this gives you for that one, and it gives you an interesting nonlinear equation in that space. So then you can really start to study it. But I had not really realized that you get essentially that these two are independent, and they just come in as some form of potential in the nonlinearity for this one.
And I thought it was something nice because, or at least a connection is that, it's connected to an outstanding problem, which is the stability of the two-line solitons for KP-II. And so if you have nothing to do sometime today, you can go on YouTube and look at a lecture of Professor Mizumachi in one of the previous workshops. He didn't quite talk about the stability of the two-line solitons, but he has some nice pictures of them in his first slides, just for motivation. Okay, well, maybe I can keep this. So now the goal for today is to start entering a bit more into the study of the Schrödinger equation on our space. Last time we saw some basic facts about how to understand the linear flow, at least on R^d. Today we'll continue with linear estimates, but this time they will be Strichartz estimates, which are the basic tools that we're going to use to extend our solutions. And so this is, now I forgot, I think 2B. And what I would like to present is two estimates about linear solutions. Now, to state them precisely, we need to restrict to frequencies, and you have two natural derivatives: derivatives in the x direction and derivatives in the y direction. They play somewhat different roles, and this will be especially important later. So I'll tell you what it is later, but this is basically the Littlewood–Paley projector on frequency X in the direction x and on frequency Y in the direction y. And so the claim is that if you only integrate on an interval of time of size one, then you can have pretty nice estimates, and you can put the regularity either in X or in Y, so you can choose whatever you want. Now, just a little remark about this. First, at least this one is the best that you can do, in the sense that this should hold for very concentrated solutions.
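As an aside, the projector P_{X,Y} just mentioned can be sketched numerically. This is only an illustrative discretization: the grid sizes and the sharp (rather than smooth) frequency cutoff below are my own choices, not from the lecture.

```python
import numpy as np

# Sketch of the projector P_{X,Y}: restrict a function on a discretization of
# R_x x T_y to frequencies |xi| ~ X in x and |k| ~ Y in y.
# A sharp indicator is used here; the lecture's projector is a smooth bump
# that is "essentially one" on the same dyadic block.

def P(u, X, Y):
    """Keep only the frequencies with |xi| in [X, 2X) and |k| in [Y, 2Y)."""
    u_hat = np.fft.fft2(u)
    xi = np.fft.fftfreq(u.shape[0], d=1.0 / u.shape[0])  # dual variable of x
    k = np.fft.fftfreq(u.shape[1], d=1.0 / u.shape[1])   # dual variable of y (integers on T)
    mask = ((np.abs(xi)[:, None] >= X) & (np.abs(xi)[:, None] < 2 * X)
            & (np.abs(k)[None, :] >= Y) & (np.abs(k)[None, :] < 2 * Y))
    return np.fft.ifft2(u_hat * mask)

# A single plane wave of frequency (3, 5) survives exactly when
# X <= 3 < 2X and Y <= 5 < 2Y, and is killed otherwise.
N = 64
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
u = np.exp(2j * np.pi * (3 * x + 5 * y) / N)
```

So P(u, 2, 4) reproduces u, while P(u, 8, 4) annihilates it.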
And so in particular they should imply the one on R³, and on R³ you should lose one quarter of a derivative. [Sorry, what is this P_{X,Y}?] Yes, you're right. It is just something that restricts your function to the frequencies that are of size X in the dual of R and of size Y in the dual of T². And this is essentially a function which is essentially one if this is of size between one and two, and then zero if this is too far from that. So essentially it forces ξ to be of size X and k to be of size Y, and so this is essentially like one quarter of a derivative in the y direction. And here this is not quite optimal, in the sense that the total number of derivatives is one quarter plus epsilon, but at least you need to pay a price in Y. So maybe you could make it one quarter minus epsilon, that I don't know, but you cannot completely avoid the loss in Y. Okay, so essentially we'll prove the first one, and the second one is not so different. And for the first one, because essentially I don't want to work with this projector, we're going to prove something slightly stronger that implies it, which is that if you forget one dimension, then you have it with no loss. And if this is true, then just by taking out the frequencies in the T² direction, you get the other one for free. Now this estimate dates back to work by Takaoka and Tzvetkov. And I would say it's fantastic, because it's something really striking: you can estimate this norm, the L⁴ norm, which is precisely the norm you can estimate by the L² norm on R², and here you have the exact same exponent on something which is much less dispersive than R². Well, essentially the first time I saw it, I found it hard to believe. So this is why we're going to prove this one, but just by changing the proof a bit, you could get the other one in a similar way. Now, on the other hand, once you believe it, it's not very difficult.
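Written out, and with normalizations that are mine rather than from the board, the stronger Takaoka–Tzvetkov estimate on R×T that is about to be proved reads, schematically:

```latex
\left\| e^{it\Delta}\varphi \right\|_{L^4\left([0,1]_t \times \mathbb{R}_x \times \mathbb{T}_y\right)}
\;\lesssim\; \|\varphi\|_{L^2(\mathbb{R}\times\mathbb{T})} .
```

This is the same exponent as the L⁴ Strichartz estimate on R², with no loss of derivatives; the projected estimate on R×T², with its Y^{1/4+ε} price in the torus direction, then follows by paying for the frequencies in the remaining direction.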
So, let's do it. What we're going to use is the fact that the L⁴ norm is just the L² norm of the square. So now let's look not exactly at the square of that, but at the product of two free evolutions, because, well, we'll see why this is good later. So it suffices to consider, or maybe let's consider, the product of two solutions, and maybe let me call them ψ and φ, and then estimate it by duality, by multiplying by some function in L². So the idea is that this product should be in L². Now, let's compute that. Remember that now we're on R×T. To understand the free evolutions, everything is much easier in Fourier space, so let's write each one as the Fourier transform of its Fourier transform. So this one is going to have ψ̂_k(ξ), and then the phase e^{it(ξ²+k²)}; well, in fact, that's just one dimensional. Well, I'm sorry, maybe to fit it on one line, let me write down the phases all together, and then the amplitudes. So I have ξ²+k², and then somehow here I have e^{iξx} e^{iky}, and then ψ̂_k(ξ). Now, this is this one. I'll have to do the same with the other one: e^{it(η²+p²)}, e^{iηx}, e^{ipy}, φ̂_p(η), then h̄(x,y,t), and dx dy dt dξ dη, and I have forgotten to say that we sum over k and p. All right, so once you do that, you can see that, in fact, you can integrate in x, y and t and just get the Fourier transform of your function h. So, in fact, this simplifies to something like the sum over k and p of the integral of ψ̂_k(ξ) φ̂_p(η) times ĥ, the Fourier transform of h in all of the variables: in x I have ξ+η, in y I have k+p, and then the trace of the evolution is in the Fourier transform in time, which is evaluated at ξ²+k²+η²+p², dξ dη. Okay, so this is really just Parseval, or Plancherel, here.
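Cleaning up the computation just described (the sign conventions are my own), the identity obtained from Plancherel is:

```latex
\int_{[0,1]\times\mathbb{R}\times\mathbb{T}}
e^{it\Delta}\psi \;\, e^{it\Delta}\varphi \;\, \overline{h}\; dx\,dy\,dt
\;=\;
\sum_{k,p\in\mathbb{Z}} \int_{\mathbb{R}^2}
\widehat{\psi}_k(\xi)\,\widehat{\varphi}_p(\eta)\;
\overline{\widehat{h}}\bigl(\xi+\eta,\; k+p,\; \xi^2+k^2+\eta^2+p^2\bigr)\,
d\xi\, d\eta ,
```

where ĥ is the Fourier transform of h in all three variables x, y, t. The point is that ĥ, a function of three variables, is evaluated on a set parametrized by the four variables (ξ, η, k, p), which is exactly the Jacobian issue taken up next.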
But now you see that if this is a function in L², and this is a function in L², their variables are completely independent, and I would like to be able to choose any of them; then I can probably do nothing better than evaluate that in L². And so now what I really have to do is estimate the norm of this one in L². Now, this is a little problematic at first sight, because h is a function in L² of three variables, x, y and t. So once I fill in all of these entries, I can integrate over three variables, but now I have a fourth integral. So there has to be something in this change of variables that cancels one of the variables. All right, so it suffices to show... You see, it's really something, in some sense, about a Jacobian estimate, because for any L² function h, I have to estimate this by its L² norm, so that this thing is smaller than the norm of h in L² of x, y and t. And here is where I have to remember one thing, which is that I'm not integrating over all time, but only over a bounded interval. And how am I going to use that? Well, instead of integrating over an interval of size one, I could integrate over an interval of size epsilon, for some fixed epsilon. And as a result, the Fourier transform in time is going to be slowly varying; this is what I have to use. So, for all h... Okay, so, well, let's do it. We'll have to choose this one as one of the variables, this one as one of the variables, and then make do with what is left. So we're going to call this one σ and this one m. And we can take these out, because those are the ones that are going to be nicely integrated directly. And then we have the leftover: the sum over k and the integral over ξ of ĥ of σ, m, and then this thing, which I'm going to call ω, squared, dσ dξ. And there is...
I don't know, I guess it's okay. Where ω is... well, let's see. I have ξ and then I have σ−ξ, so I'm going to have 2ξ² − 2ξσ + σ². And then the same thing: I have k and I have m−k, so plus 2k² − 2km + m². And so now we have seen that the problem is that we have two integrals, essentially, for one variable. And the way out of this is going to be to add one more sum to that: the sum over m, the integral over σ, of the sum over l, the sum over k, the integral over ξ. Because what I want to use is the fact that this might be complicated, but in fact this function doesn't really depend on ω, or at least if I change ω by a bit, the function is not going to change too much. So I put the indicator that ω is between l and l+1, times |ĥ(σ, m, ω)|², and integrate dσ. And now what I want to say is that, well, in fact, this thing doesn't really depend on ω in this interval, and so I replace its pointwise value in ω by its average on that interval. And if I do that, I can write it this way. In general, this function I can only bound by some constant times its average, and then I have the supremum over l of the sum over k and the integral over ξ of that thing. So at least the formal manipulation is correct. Now, why was that such a good idea? This is where we used that ĥ is, let me forget about those two variables, slowly varying. And now you see how you should choose the epsilon, so that at least you can make sure that this function is essentially constant on intervals of size one. Okay, this you can always do, and in fact you could do it whatever the width of the interval you're choosing. But here is something really surprising that happens in this case, and it's a nice computation, which I'll let you do on your own when you go back home. It is the fact that this is bounded. What is bounded? Well, this quantity:
2(ξ − σ/2)² + 2(k − m/2)² plus something; and in fact the sup over l, σ and m is uniformly bounded by some number. So now I'll let you prove that. What is behind it? It's the fact that you're integrating over Z×R. And so what is this? Well, for every integer value of k you have one line, and what you're really doing is counting, or looking at the Lebesgue measure of, the set where this quantity is between l and l+1. So what that essentially means is that you're taking any center inside your domain, and you're looking at any circle of a given radius r, which is going to be something like the square root of l plus σ² plus m², and you're looking at the total length of those lines which lie at distance between r and r + 1/r. And the surprising thing is that this is in fact uniformly bounded. Here you could observe that if, instead of integrating over the lines, I were just integrating over the whole of R², so that I would have to compute the area of this domain, then it would be no problem. Why is it surprising that when I integrate over Z×R I still get something uniformly bounded? Because if I were just counting, if instead of having Z×R as my domain I was trying to do the same computation but now with only integer frequencies, then this is no longer true. In fact, you can find some radii, arbitrarily large, such that this grows; it's bounded on average, of course, but it grows slowly, like the divisor bound. And this is related to what Professor Procesi said: if you look at the number of integer points on big circles, then you could have arbitrarily many. And actually, something that I learned recently, which I thought was worthwhile, is that the divisor bound is sharp; you can really saturate it, with this exponential of log over log log.
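With all the constants stripped away, the dichotomy here can be checked numerically: on R×Z the measure of the level set {r² ≤ ξ²+k² ≤ r²+1} stays uniformly bounded in the radius, while on Z×Z the number of lattice points on a circle can be made large by choosing radii built from primes congruent to 1 mod 4. The function names and thresholds below are my own.

```python
import math

def annulus_measure(r2):
    """Measure of {(xi, k) in R x Z : r2 <= xi^2 + k^2 <= r2 + 1}."""
    total = 0.0
    kmax = int(math.isqrt(int(r2 + 1)))
    for k in range(-kmax, kmax + 1):
        hi = r2 + 1 - k * k
        if hi <= 0:
            continue
        lo = max(r2 - k * k, 0.0)
        # On the line at height k, xi ranges over two symmetric intervals.
        total += 2 * (math.sqrt(hi) - math.sqrt(lo))
    return total

def lattice_count(n):
    """Number of integer points on the circle x^2 + y^2 = n."""
    m = int(math.isqrt(n))
    return sum(1 for x in range(-m, m + 1)
                 for y in range(-m, m + 1) if x * x + y * y == n)

# On R x Z the measure stays bounded (around pi), uniformly in the radius:
print([round(annulus_measure(r * r), 2) for r in (10.5, 100.5, 1000.5)])
# On Z x Z the count on a single circle can be made large:
print(lattice_count(25), lattice_count(5 * 5 * 13 * 17))
```

The second print contrasts r² = 25 with r² = 5525 = 5²·13·17, whose prime factors are all 1 mod 4, so the number of representations jumps; pushing this gives the divisor-bound growth mentioned above.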
So now, once we've made this observation, let's just observe that we're done, because if this supremum is uniformly bounded, I can just forget about it, and now this sum becomes just an integral and I'm in business. All right, so a few observations about this. First, you would treat the other estimate in a fairly similar way, except that there you will have to count integer points, and this is where you pick up this extra epsilon loss. Second, one thing which is worth noticing is that we had this estimate on R×T, and by doing something completely trivial we got an estimate on R×T² that is still optimal, in the sense that it is scale invariant. And so essentially the way I understand it is that the dispersion really helps you once, in one direction, and it helps you when you integrate in the time direction, regardless of how many entries you have here. What you can do in higher dimension is lower this exponent, because then you can try to... All right, so that's it for the first estimate. Now, one last thing that I would like you to observe is that it implies a stronger version of itself, which is a bilinear Strichartz estimate. And maybe to make it simple, let me... So this is again a nice exercise, and I would encourage you to try to do it in two ways. One would be just to go through what we have been doing, and then use the fact that we really had a bilinear estimate on these guys. Another one is that this is in fact, in this case, a formal consequence of the linear estimate, if you use Galilean invariance and orthogonality properly. And, well, I guess this is always true, but it is mostly useful in this case. And what that means is that whenever you look at the product of two solutions, not only can you use the linear estimate to bound it, but you can do better, because you can always assume that the loss of derivatives falls on the slower frequency, on the smaller frequency. [Why is Y′ equal to Y?]
[I would expect to see again the Y^{1/4}.] No, because you get it twice; you should bound it by the L⁴ norms. And in fact, as I think I learned from Professor Gérard, just from this you can read directly that you should be locally well posed in H^{1/2}. All right, and now... well, maybe let's stay here. So this is it for the estimates on linear solutions. Now what I would like to discuss is how you can pass from this to estimates for nonlinear solutions. And here you see that there is something to be understood, because whenever you only use the regular Strichartz estimate, it's essentially clear how you're going to use it in your nonlinear analysis: you're going to work in a space of functions such that this norm is finite. However, if you want to use bilinear Strichartz estimates, it becomes more complicated, because what is the space of functions such that, whenever I take two of them, I have a gain like that? It's probably not going to be a nice linear space. So, a priori, it's complicated, except that you can remember that you're going to produce your solutions by iteration, starting from linear solutions, and so if you knew that your functions were always some form of superposition of linear solutions, then you would get that for free, and it would essentially be a linear space. And so this is the next thing that I would like to discuss, and it's what I call a functional-analytic trick, a cheat. So I'm going to talk about it, but I'm not going to prove it, and it's this fact that if you have something for linear solutions, then you have something for nonlinear solutions. So if you can show an estimate like that, so this is your space, let me call that space M, and in our case it's going to be R×T². So if you can get some nice estimate for products of linear solutions, then for all initial data u(0) in the ball of your space, so let me not.
So we try to keep it fairly general, even though it's probably going to be false in full generality, but I want to make a statement that is going to be true every time I'm going to use it. So, in the ball of size, some constant that let me forget about, times ε⁻¹, in the ball of the space H, there exists a unique solution of NLS which is continuous on your interval of time, with values in your space H and in some more complicated space, such that u(0) equals, well, u(0). So in full generality, as I said, this is probably false; however, and I will again not prove it, I would like to motivate it a bit, to get some idea where it comes from and why one could hope for something like that. And what is the motivation for this? Well, if you remember our equation, you can solve it using the variation of constants, or the Duhamel formula, and it's going to read something like this; so this is just the usual Duhamel formula. And you see that, better than the solution u itself, which is something that is going to evolve a lot, there is one thing that is supposed to be much more stable, and it is u conjugated by the free evolution, so u straightened along those characteristics, this thing that I'm going to call v(t). And so, more stable, and, well, then maybe the equation for v becomes just... and this thing, which is something that is going to appear all the time, I'll call the interaction of (v, v, v), or, in other words, of u(0). And now you see that to make sense of your equation, you really only have to make sense of this object. Now this is a function of t, and let's assume that you would like to hope that v is in some Hilbert space or whatever; in fact, you probably want to estimate something like that by duality, so let's look at it in a pairing with something that is going to be in the dual space of what we want, because we can't forget about the other thing. And so now let's write what it is: it is the integral over time of
the integral from 0 to t of e^{−isΔ}v(s) times the conjugate of e^{−isΔ}v(s) times e^{−isΔ}v(s), paired with h(t), and then there is an integral in t. Now, if I work in, say, L², this free evolution is going to be self-adjoint, so I can put it on the other side, to get something which is a little more symmetric. And now what I can do is switch those two time integrations, because the only thing that at this stage really depends on t is h. And now what I get is just the integral over s of e^{−isΔ}v(s), the conjugate of e^{−isΔ}v(s), e^{−isΔ}v(s), paired with H(s), ds, where, and let's assume that we work in L², H(s) is the integral from s of h(t) dt. And now you start to see something that resembles very much what we are assuming control of. And it would be exactly this if you could say that v(s) is indeed very stable, in the sense that it doesn't change too much and can be approximated essentially by a constant, so by the initial data, let's say (u₀, u₀, u₀); and that you have the same property for this capital H. But at least this capital H is some integral, so even if you started with some weaker information on this little h, this has more chances to be true. Now, to make it into a nonlinear solution, you will have to do some fixed-point argument, etc., so you expect that one of these entries is going to be replaced by the difference of one Picard iteration and the next, and this is why in general you want to allow three different functions. Now, one last remark is that you can also say that if this holds, then at the very least, for your flow, if it exists, this is its third derivative at zero, and this tells you that its third derivative at zero is bounded in the space H. Okay, so now, where is the cheat? The cheat is in saying, well, that you can form spaces X that are going to do this trick. And so I think it started with some work of Bourgain, and this is how we introduce the X^{s,b}
spaces. What I am going to use secretly is rather some refinements of those, that were introduced by, I think, Tataru, but at least you can learn about them in the article that I talked about yesterday. And if I can give you my personal opinion, the X^{s,b} spaces are easier to define, but there is a cost of entry if you want to work with them, and my impression is that it's essentially the same cost of entry that you have if you want to understand what the refined spaces are and how to work with them. For what we are going to use today, both of these choices would work, because we are never going to be really sharp in terms of the regularity; we can always lose a little bit. And I just want to say one last thing about this, which is that once you understand that, then it's good, but it still leaves you with the question of how to choose this space H. And there is something which is essentially equivalent to that, but that I personally find easier, to guess what the right space H is when you want to get some kind of estimate. So this is the last thing I want to add: it suffices in most cases, because it depends obviously on what this space H is, but let's say it's a Sobolev space, to do something which is maybe a little less symmetric, but that I find easier; so let me not repeat this. What you need to be able to do is to say that the two worst cases, the two worst functions, I'll estimate them in L², and then the leftovers are the ones that are going to define my space H. And so once you start to see this, then you start to see, for example, where H^{1/2} is going to appear. Up to a permutation, but really this is just to make it more formal, what happens is that you have to estimate the two worst terms, whatever that means, in L², in the weaker space, and then, if you can still get something bounded with the two nicer guys in your space H, you have a nice theory in H. All right, so this was just a motivation, but I don't think I want to talk more about this. So if you
have any question about that, or a comment, now is a good time to do it. [Excuse me, in the space for this lemma, why don't you need the regularity in x to control the solution, and why can't the first term itself control the L⁴ norm?] Well, I guess we saw the proof, so... [I mean, is it because you are restricted to times in [0,1]?] No, the point is that you get to choose, to some extent, which of the gradients you want to use to control your solution; certainly all of this is bounded by the sum of the frequencies to the one quarter. This is just something a little bit more refined, and it won't be important for us today, but when we start looking at this modified scattering, the whole point will be to be very economical with the regularity in the torus direction, and this is why it will be important for us to make sure that we only lose in the one direction where we agreed from the beginning to lose a lot, while in the direction where we want precise control we lose as little as possible. Okay, and so now, after this, essentially with one or two ideas, the proofs that we are going to see today will rely a lot on estimating things like that. So maybe let me do it once, to show that you have local well-posedness in H^{1/2}, and after that I hope you will just believe me for the computations. Just how much time do I have?
10 to 13 minutes. Okay, all right, so then at least we can do... yes, at least let's do this part, and be maybe more sketchy on the modified scattering. So now let me state this proposition. And you could get down to one half; I just don't want to be too technical. In fact, this is essentially a consequence of what we had before. So what we need to do is to estimate something like this: you have this integral of φ₁ φ₂ φ₃ φ₄ dx dy dt. And here I would now like to forget about this distinction between the regularity in x and the regularity in y. At this stage I could probably keep it, because I'm only going to use the L⁴ estimate that we had before, which only loses regularity in y; however, after that, I want to use a slight variant that has to mix both directions. So what is this? P_m is now the same thing as before, except that it is the supremum of the two frequencies that is of size m, and it is exactly of size m. And so the integral that I have to control is the sum over all choices of m₁, m₂, m₃, m₄ of I(m₁, m₂, m₃, m₄), because I can decompose my functions into all of their Fourier pieces, etc., and then I have to sum over all the possibilities. And there is really one thing that we need to remark. First of all, we can reorder them. That is a little lie, in the sense that some of them should have bars and some of them should not, but it won't make any difference: all of the estimates that we have proved hold with or without bars; it doesn't change anything, you could really just conjugate and change the time. So once you accept this, then it is clear that we can reorder them. But there is one new thing that we have to remark, which is that the two highest frequencies have to be comparable. And why is that? Let me just write it. Well, just decompose this, I mean, write the Fourier inversion of all of this, and what you're going to do is to integrate; so this one would be at frequency ξ₁, this one would be at frequency
ξ₂, ξ₃, ξ₄, and now you integrate that over x, so you're going to integrate e^{i(ξ₁+ξ₂+ξ₃+ξ₄)x} dx. Well, this is formal if I'm really on R×T², but you can make sense of it, and you see that this is going to be zero unless all of the frequencies add up to zero. And now, if one of them is much bigger than the three others, then this can never happen; hence the two highest are comparable. And once you know that the two highest are comparable, well, you cannot say anything about their difference: it could be very small. So really this is the only thing that you can do in general, but the amazing thing is that this is completely enough for us, because now, once we observe that, we have a product of four functions, and we know that whenever we take the product of two of them, we can estimate that in L², and things go well. And so, okay, so now, assuming this ordering, m₁ is going to be the biggest, and I'm going to, all the time, group one big frequency with one small frequency. Why? Because I have seen that whenever I have a product, I can always put the loss on the lowest frequency, and so if I have two such terms, I would like to forget both of the highest frequencies. And now I'm going to use the estimate that I have, of course, just erased, but that says that I can estimate each product with one half of a derivative on the smallest frequency, so in this case it would be m₃ and in this case it would be m₄; and now each pair is in L², so the whole thing is in L¹. Okay, and now I would like to have an estimate like that, and I remember that I'm not working in an L² space, that would be too much; I'm working in an H^s space. So the two worst guys I'm going to keep in L², and for the two lowest I'm going to start using the information about the space, and so in particular I can add not just one half of a derivative but even s derivatives: so I'm going to have minus s here, and then here H^s and here H^s. And now, that's it. So why is that it? Well, those two I only control in L², but they
are twice the same, essentially, so I can really just do Cauchy–Schwarz, and then I can sum over those frequencies. Those guys, well, I have to sum them all, but they come with a negative power, so I can really sum them all and get something bounded by the H^s norm. And that would give me local existence for small data. Now, if I want local existence for large data, I have to produce an epsilon in front of this, but that is not something particularly difficult, because at least for high frequencies it is already given directly by this estimate. Let's say if one of them, m₃, is bigger than some number A: I have something bounded, as we have seen, by φ₁ in L², φ₂ in L², φ₃ in H^s, φ₄ in H^s, and if I only sum over m₃ big, with its negative power, it's a dyadic series, I pick the first term, and I'm getting A^{−(s−1/2)}. So what that means is that if my third largest frequency is big, then I'm good. On the other hand, if my third largest frequency is small, then I didn't really have any evolution, I essentially just have a bounded function, and so I can use a cruder estimate for m₃ small instead. So I'm going to estimate one term, and then I have to gain some smallness somewhere, but what I gain is the time, and so I get T^{1/2} times φ₁ in L², φ₃ in L², and so on. And you see that if I plug that back in, well, now I'm going to sum those frequencies all the way up to my threshold A, and so I'm going to get T^{1/2} A^{3/2} times the norms of the functions. And in fact I can do it for both terms, so I could get that twice. And so now, when I add those two things, I see that I should first choose A large, to produce a small epsilon, and then choose the time small enough so that I also have an epsilon here, and I'll get the estimate. So this tells you that whenever you start with an initial data in H^s, for any s
bigger than one half, you always have a local solution on an interval of time which, granted, has a good chance of being fairly small. And if you start smooth, in particular in H¹, then you know you can find a uniform bound on your H¹ norm, and so, gaining a little epsilon everywhere, this can be made global if the initial data is in H¹. And now what I wanted to at least discuss a bit is what happens if your initial data is not in H¹. Okay, so first, this is the end of the introduction, saying that whenever you're in H^{1/2} you really have a nice local theory, and everything can be solved by fixed points; this is really the best possible scenario. And now the question: if your initial data is not in H¹, then, well, at least you cannot really use the fact that the energy gives you a uniform bound; in fact, we have seen that it is possible that any norm it has is going to grow. So you have to try to do something, but still you can decompose; you can at least make the following observation. Let's say you can decompose your initial data into parts. First of all, what is the difference between H^{1/2} and H¹? Well, you're not going to see this difference for the frequencies which are bounded, because you're just putting a weight, and if ξ is smaller than 1, then a weight (1+|ξ|)¹ or (1+|ξ|)¹⁰ is not going to make a difference. So the only difference is in the tail of the decomposition. So now let's try to decompose our function into something in the bulk, where the particular topology in which we estimate it doesn't really change, plus a tail. So what we see is that the bulk gives me a nice H¹ function, and if I imagine that the bulk contains most of the spectrum of my solution, then I could just try to truncate it, and this could lead to a global solution. But of course I would have to correct for an error, and this error, which is going to be important for us, is such that
it is not in H^1 — that would be too good — but it is in something much better, where I can make sure that it stays an error: a tail. And where can I make sure that it is an error? When I estimate it in H^{1/2}, or maybe, with this proof, in H^{1/2+}. In H^{1/2+}, since it lives at high frequencies, by a Chebyshev-type argument I can bound it by a negative power of N times the H^s norm, and so this is small for N large enough.

Okay, now I have a nice perturbation theory, so I can make sure that this decomposition into two pieces remains valid for a small time. And what makes me hopeful that, by working a bit more, I can really make it work for a larger time, is the fact that both pieces of the decomposition get better properties as s increases. Of course, if I were to reach s = 1 it would be great, because I could push N all the way to infinity and forget about this. But what it means is that, at the very least, this decomposition should remain valid for longer and longer times as s increases.

At this stage you can ask: can you make it global or not? And the point is that you can. To make it global, what you will see is crucial is to be able to track down when something leaves the part of my solution at frequencies below N and gets into the tail, making it grow, so that my tail, which started small, is not small anymore. To keep this decomposition, we would like to follow the motion of frequencies, and this is where you have to play a bit. I think this kind of discussion was first proposed by Bourgain, who could really prove that you can make sense of it in a very strong sense: you evolve this part nonlinearly, you evolve that part linearly, and you can always keep some form of this decomposition provided s is large enough, that is, close enough to 1. What we are going to see is a variant that was proposed later by the I-team: there is a way to tweak this a little bit, which
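Bourgain's high/low decomposition just mentioned can be sketched as follows, assuming the model is the cubic NLS (the transcript does not write the equation out, so take this as illustrative only).

```latex
u = v + w, \qquad v(0) = P_{\le N}\,u_0, \qquad w(0) = P_{> N}\,u_0,
\\[4pt]
i\,\partial_t v + \Delta v = |v|^2 v
\quad \text{(low part: full nonlinear flow, with a conserved energy),}
\\[4pt]
i\,\partial_t w + \Delta w = |u|^2 u - |v|^2 v
\quad \text{(high part: small in } H^{\frac12+}\text{, treated perturbatively).}
```

The point is that v sees the full nonlinear evolution but has a genuine energy, while w starts small and one tries to keep it essentially linear for as long as possible.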
is to not abruptly remove the tail, by using the so-called I-method. So what is this? We would still like to track down the bulk of our energy this way, but we want to be more gentle with the tail, and we would like to use in the best way possible the fact that we are working in H^s. So instead of looking at the sharp projector — which essentially means looking at our function multiplied by a sharp cutoff on the Fourier side — we are going to look at a multiplier; let me draw it. We want something that gives us the same information in the bulk, but we do not want to enforce that it becomes 0 too fast; instead we want to enforce that it decays at infinity, like this. And once you know that you want your multiplier to be below this curve — equal to 1 for the first N frequencies, and then decaying slowly, so as to make sure the function remains pretty smooth — you essentially have one natural choice.

As we are going to see, another way to look at it is that instead of looking at your solution u, you look at Iu, this multiplier applied to your solution; they call the multiplier I because it is some form of integration. Now, when you have good bounds on Iu, what information does that give you on u? If Iu is bounded, then u is below the following envelope: u is Iu divided by the multiplier. As a result, you have very good control on your solution for the first Fourier modes, and worse and worse control as the frequency becomes larger and larger. But somehow you want to compensate for that by the fact that, as the frequency gets larger and larger — because you are away from the space where you had a nice perturbative theory — you also gain some smallness from there. And observe that this is much better than only having control on the sharp truncation of your solution, because if
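The multiplier being drawn is, in the usual I-method convention (the transcript gives only the picture, not the formula, so this is the standard choice rather than necessarily the one on the board):

```latex
\widehat{Iu}(\xi) \;=\; m(\xi)\,\widehat{u}(\xi),
\qquad
m(\xi) \;=\;
\begin{cases}
1, & |\xi| \le N, \\[4pt]
\left( \dfrac{N}{|\xi|} \right)^{\!1-s}, & |\xi| \ge 2N,
\end{cases}
```

with m smooth and monotone in between. The envelope statement then reads

```latex
\|u\|_{H^s} \;\lesssim\; \|Iu\|_{H^1} \;\lesssim\; N^{\,1-s}\,\|u\|_{H^s},
```

so I maps H^s into H^1 with a loss of N^{1-s}, and controlling Iu in H^1 controls u in H^s.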
you only have control on P_{\le N} u, then you only know that your solution is below this envelope, and here there is a way for it to go to infinity too fast to be under control.

Now the last preliminary remark: how are you going to control your solution? This is the way to get the process started, which at least Professor Vichilia talked about, and maybe also Daniel. You want to control solutions in H^s, or in fact in some norm; you would like to say that if this piece were in H^1, then you would use the fact that the energy is controlled. So you want to control the H^s norm, and so you look at an energy, and you can start by looking at something like that. At this stage, now that we have u and Iu, you can play the same game with either one; whatever estimates we do will in fact work mostly the same way, and this is how I am going to state the more precise results. But to see another way to get to the energy, let me work with Iu, because it is slightly nicer.

All right, I will spare you the computation and just give you the result, and then we will see why it is good to control Iu. Once you have computed it, you can plug that in — it is an explicit computation — you get this expression, and you get the same thing as above. Now I can explain why this gets you started with a nice quantity. You chose I in such a way that it is equal to 1 at low frequencies. So now decompose all of those terms. First we see that we have essentially the same term on the right-hand side; on the other side we have either the Laplacian of Iu, which is a pretty scary thing because we can only afford much fewer than two derivatives, or we have three copies of Iu, and this one is always much better than that one, so let us focus on this one. Now we can observe that if I were identically 1, then this term would vanish, and I
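The cancellation described next can be phrased on the Fourier side. Schematically, for the cubic nonlinearity, the dangerous quadrilinear term in the increment of the modified energy carries a commutator symbol; this is my reconstruction of the blackboard expression, not a formula given in the transcript:

```latex
\frac{d}{dt}\,E(Iu)(t)
\;=\;
\mathrm{Re}\!\int_{\xi_1+\xi_2+\xi_3+\xi_4=0}
\Bigl( 1 \;-\; \frac{m(\xi_2+\xi_3+\xi_4)}{m(\xi_2)\,m(\xi_3)\,m(\xi_4)} \Bigr)\,
\widehat{\Delta I\bar{u}}(\xi_1)\,
\widehat{Iu}(\xi_2)\,\widehat{I\bar{u}}(\xi_3)\,\widehat{Iu}(\xi_4).
```

The bracket vanishes when all |\xi_j| \le N, since m \equiv 1 there; and if only one frequency exceeds N, then \xi_2+\xi_3+\xi_4 is comparable to that big frequency and the quotient is close to 1, so the bracket is again small. This is the mechanism forcing two of the four frequencies to be \gtrsim N.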
is indeed equal to 1 if all of the frequencies are small. So we already know that, for this term not to vanish, at least one of the factors is forced to be a tail, and at this stage you start to gain a smallness factor. But then the amazing thing is that you get one additional cancellation. So you know that at least one frequency is big; suppose all the others are small. If only one of the frequencies were big, then the multipliers on the small frequencies would be equal to 1, the frequency of the output term would be essentially the same as the frequency of the big term, so the multiplier I would fall on the biggest frequency on both sides, and you would get another cancellation there. So for all practical purposes, in this expression of four terms, you can force two terms to be at high frequency. After that it is turning the crank — I will give you the two steps next time — it is really just a matter of estimating carefully where those commutators arise. I am sorry I went a bit over time.

"So the global result here is for s greater than one half?"

No, the global result is only for s greater than 5/6, but this is by no means sharp; you could improve it. This is just using this estimate with the so-called first-generation I-method, and if you wanted you could do better. The important thing for us was to get a global result for some s below 1, without too much growth, and then to say that in fact you just cannot do better. And one more remark about all of these estimates: the L^4 norm is the only estimate that we use, so all of this would work in the same way on the torus.