Thank you very much for the invitation, and thank you very much to the organizers for inviting me to this very beautiful conference. I must say I'm definitely not the one who has known Fong for the longest time, but still I remember the first time I met him. I was very young. It was in 1995, my first conference in the U.S., in Minneapolis. And I actually remember Fong's talk very well. He was talking about the Zakharov equation. What I remember most is two things. First, I didn't understand a lot, and there are probably a few reasons for that: I was very young, I was working on completely different things, and Fong might not have been the best speaker at the time. Second, what I really remember is that while he was speaking, he was doing different mathematics. He was doing something which was not mainstream, something different from what the other people at this conference were talking about. And to this day that's really what strikes me most about Fong's work: first, I often don't understand; but second, it's always interesting and always very original. So happy birthday, Fong. Okay, so what I'm going to talk to you about today is based on joint work with Herbert Koch, Nikolay Tzvetkov, and Nicola Visciglia. It's about probabilistic and deterministic scattering, in this order: I will start with probabilistic, and then I will show you that the probabilistic idea has actually led to a deterministic result, which is a little unusual, because these days it's more often the other way around; it's more that people in probabilistic theory are trying to redo, in some similar context, what people have been doing in the deterministic setting. So I will explain some known results; of course, I'm not going to give you a complete panorama of the field of deterministic nonlinear Schrödinger equations.
And then I will explain what we are trying to do in terms of Cauchy theory and scattering for random initial data in the context of nonlinear Schrödinger, and then I will come back to deterministic theory. Okay, so first, I thought all my talk would be about defocusing NLS, except at the very end on the deterministic side, where I will just say a few words, because the deterministic result has nothing to do with whether the equation is defocusing or focusing. Okay, so you consider the nonlinear Schrödinger equation with nonlinearity |u|^{p-1}u and initial data φ. Of course, you have two conserved quantities, the L² norm and the energy. This is defocusing because there is a plus sign here: the energy controls both norms, there's no competition. And once you have this equation, you know that you have Strichartz estimates for the linear solution: an L^q_t L^r_x estimate, global in time, controlled by the L² norm of the initial data, and the dual Strichartz norms for the source term. And then, once you have that, a very standard contraction argument shows that the Cauchy problem is locally well posed for H¹ initial data if p is not too large, depending on the dimension. This is the H¹-subcritical theory; the inequality I put is strict, and I will not be interested in critical problems. Then you know that the time of existence is bounded from below if the H¹ norm is bounded from above. So you have local existence, a uniform time of existence, and control of the H¹ norm: nothing can happen, you can iterate your contraction argument, and you get global existence. A very, very standard thing. And of course this method tells you nothing about the behavior of the solution, so you want to understand what happens for large time. That's the first question that comes to mind once you have a global Cauchy theory like that.
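To fix notation (this is my transcription of the blackboard material; one standard normalization, with constants and signs varying by author), the setup just described reads:

```latex
% Defocusing NLS on \mathbb{R}^n with data \varphi
i\partial_t u + \Delta u = |u|^{p-1}u, \qquad u(0,\cdot)=\varphi,
```

with the two conserved quantities and the Strichartz estimate

```latex
M(u)=\int |u|^2,\qquad
E(u)=\frac12\int |\nabla u|^2+\frac{1}{p+1}\int |u|^{p+1},
\qquad
\|e^{it\Delta}\varphi\|_{L^q_t L^r_x}\lesssim \|\varphi\|_{L^2}
\ \text{ for }\ \frac2q+\frac nr=\frac n2,\ q\ge 2,
```

and the H¹-subcritical range is p < 1 + 4/(n−2) for n ≥ 3 (strict inequality, as said in the talk).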
Okay, so if you look at the different values of p, you have 1 + 2/n, which is the threshold between short range, where p is larger than 1 + 2/n, and long range, where p is smaller. What happens in the long range setting is that, well, you can say things, there are some partial results, but you know that no nonzero solution can be asymptotic to a linear solution. Scattering doesn't work, at least in dimension greater than 2; in dimension 1, to the best of my knowledge, this is known up to the quadratic nonlinearity, and nobody expects scattering in dimension 1 either. Okay, so now you have the famous classical result, which is the scattering result in H¹. I guess the first result of this kind is due to Ginibre and Velo, and then I put some names, but there were many other contributors; I put basically the names and the setting that I will be using. So you assume that you are in the short range regime, and then you know that for any H¹ initial data you can find asymptotic states φ±, in H¹ also, such that when you look at the difference between your nonlinear solution and the linear solution associated to φ±, it goes to 0 as t goes to plus or minus infinity. It means that the nonlinear solution is, for large time, asymptotic to a linear solution. So in some sense that's the easiest case of the soliton resolution conjecture: there are no solitons, and the nonlinear solution is asymptotic to a linear solution. Of course you can write it this way, but usually people prefer to write it the other way; it's essentially equivalent, because you apply e^{-itΔ} to both terms, which is an isometry on H¹, so it is the same thing. Okay, so this is H¹ scattering. This is the property I will be interested in. Okay, so now I have put many contributors.
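Written out (my transcription of the statement on the slide), the H¹ scattering property is:

```latex
\text{for } 1+\tfrac2n < p < 1+\tfrac{4}{n-2}:\qquad
\forall\, \varphi\in H^1,\ \exists\, \varphi_\pm\in H^1 \ \text{such that}\
\lim_{t\to\pm\infty}\bigl\|u(t)-e^{it\Delta}\varphi_\pm\bigr\|_{H^1}=0,
```

equivalently, since e^{-itΔ} is an isometry on H¹,

```latex
\lim_{t\to\pm\infty}\bigl\|e^{-it\Delta}u(t)-\varphi_\pm\bigr\|_{H^1}=0 .
```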
So Ginibre and Velo, as I said, Nakanishi, and then Colliander, Keel, Staffilani, Takaoka, and Tao, Planchon and Vega, Visciglia, and as I said, it's only a small list of the people who contributed to this. Okay, so now the questions I want to understand a little bit are the following. Can you find some other regularity thresholds? Second question, can you describe the behavior of the wave operators? Because wave operators are really nonlinear objects: the map sends the initial data to the asymptotic state. The proof tells you that they exist; they are explicit in terms of the solution, but not in terms of the initial data. Can you give at least the beginning of a reasonable description? And then another set of questions, which is very much related to random initial data, is the fact that you know that NLS is ill posed below some regularity. If the initial data are not smooth enough, and this is the threshold here, then you know that there are no reasonable solutions; actually, you have some kind of instantaneous norm inflation for the solutions, and so you cannot solve NLS at that level of regularity. The question I want to address is whether this is a generic phenomenon or whether it is only due to some very special initial data. Actually, if you want to prove this kind of ill-posedness result, the initial data you consider are explicit: they are something like ε^{-α} φ(x/ε), just initial data which concentrate at a point; you can take (x − x₀)/ε, of course, to concentrate at x₀, and then you put the ε factor in front to ensure that your family of functions is bounded in H^s. So it's really point concentration. And of course, functions which concentrate at a point are definitely not typical functions. They exist, but if you think of choosing a function randomly, it's never going to concentrate at a point, in some sense.
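For concreteness, the concentrating family can be written as follows (the exponent bookkeeping here is mine):

```latex
\varphi_\varepsilon(x)=\varepsilon^{-\alpha}\,\varphi\!\Bigl(\frac{x-x_0}{\varepsilon}\Bigr),
\qquad
\|\varphi_\varepsilon\|_{\dot H^s}
=\varepsilon^{\frac n2-s-\alpha}\,\|\varphi\|_{\dot H^s},
```

so the family stays bounded in Ḣ^s as ε → 0 exactly when α ≤ n/2 − s: these are data concentrating at the single point x₀, and they are the ones driving the norm-inflation examples below the well-posedness threshold.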
Okay, so now I'm going to speak a little bit about random initial data, and I have to tell you the rules of the game. The rules are here to ensure two things. First, that I'm going to prove some results, so the rules cannot be too drastic. And second, that I'm not cheating; there have to be some rules, because otherwise, as I will show you with some examples, you can start saying things which are crazy. Okay, so the rule is the following. You want to endow the set of distributions with a non-trivial probability measure μ which is supported in some H^s space, so at some level of regularity. And then you want to show that you can solve your equation locally, or in some cases globally, in time for μ-almost every data; in some cases on a set of positive measure, which may already be something. And then you also want to ensure that the measure is supported on a set in H^s where you know that you don't have a good deterministic theory. That's the part where I'm not cheating. But of course there has to be a balance: your initial data still have to enjoy some additional properties with respect to the H^s space, because otherwise you run into these kinds of counterexamples, for which the solutions are going to be poorly behaved. So you want to know that functions of that kind, in some sense, have zero measure. Okay, and then you want to go further. You want, for example, to show, in the case of this talk, that we have almost sure scattering, and in some cases we want to be able to describe precisely the behavior of the wave operators. Okay, so now a little bit of history. The starting point is definitely Bourgain's work on Gibbs measures for NLS, where he implemented this strategy, precisely because it was a different idea somehow. Bourgain had the Gibbs measures, which were living at a very bad regularity level, namely just below L², while NLS in 2D is well posed only in H^ε.
So there was a small gap, and since he wanted to study the Gibbs measure, he was forced to develop this Cauchy theory for this particular random initial data distributed according to the Gibbs measure. Later, with Tzvetkov, we actually changed the point of view. The idea we had was that we were not going to let probability be an enemy; what we wanted to find, on the contrary, were examples where probability is going to help you, where probability gives you new examples for which you can say things, whereas the deterministic situation is bad. Somehow, we wanted to give examples of data for which NLS is much better behaved than what would have been predicted by the deterministic theory. Okay, so now I can give you, to my knowledge, the very first result of that kind. It's a one-dimensional result, and it is the following. You can construct measures supported on H^{-ε}, actually on a Besov space B⁰_{2,∞} for the experts, for which you can prove global existence for NLS in 1D. The nice point is that it doesn't see the size of the nonlinearity: it's true for any p. And if p is larger than 1 + 2/n with n = 1, so p larger than 3, you have almost sure scattering. And 3 is the best index you can hope for, because of the result that I reminded you of a little before: below 3 there is not going to be any scattering result. Okay, and now the second, remarkable, thing is the following. You see, the initial data here live at negative regularity, but the scattering phenomenon here happens at positive regularity. So it's an interesting thing: you look at two terms, neither of which belongs to this space, but the difference is more regular. It tells you that somehow we accomplished some part of the program: you have a description of the wave operator, the map sending u₀ to φ±.
It's actually equal to the identity plus a term which is smoother and decays, and the convergence in the scattering result is better than the regularity of the two terms. Okay, a few comments. The initial data are in L² and no better: it's possible to show that they are in no H^ε space for ε positive, and they don't have any decay in x measured in L² norms. So they don't enter the usual deterministic theory. And what is nice too is that we have a very good statistical description of the evolution of the measures. Also, here you see that you have good decay properties for the solutions, and if you forget about this log to the one-half, this is the best decay you can hope for, because it's the decay given by the linear solutions. And it's true even for small p's: you don't have scattering, but still you have the same decay as for the linear solution, up to the log loss, almost surely. Okay, so now a little history. After this result there were many, many extensions in this direction. Aurélien Poiret proved, not almost surely but on a set of positive measure, global well-posedness and scattering for more general measures. Then you have a set of results by Bringmann, Dodson, Lührmann, and Mendelson for random perturbations of energy critical NLS, and then extensions to higher dimensions in the radial case. And then the beautiful simultaneous results by Camps and Schenzofer, who also obtain global well-posedness and scattering for some measures other than the ones I'm considering here. So in this business you can play with the method you're using and you can also play with the kind of measures you're using; there's a lot of room to work with. Okay, and a nice thing is that all these random data almost sure scattering results exhibit the same couple of properties.
And it's inherent to the method of proof. You have well-posedness at regularity thresholds where deterministic NLS is ill posed; that's one of the motivations for using random data in this setting. And you have this smoothing property of the wave operator, which is the identity plus a smoothing operator. So a natural question, of course, is whether the randomization is important for the second property or not. It actually turns out that you just have to ask the question and you have the answer, and the answer is no. So this property, which came up because in the context of random Cauchy theory it is natural to prove a result like that, is actually always true, or at least true in a large setting. Now I'll just state one example, but what I think is that this is a quite general result; we checked only this one, but there's no reason for it not to be general, with some numerology of course. So to fix ideas, you take the dimension to be 3 and p = 3, so it's the cubic NLS in dimension 3, perhaps one of the simplest models. Then it's easy to see that it is short range, and you have the Ginibre–Velo H¹ scattering result, which tells you that for any initial data φ in H¹ you have asymptotic states W±(φ), and your solution is asymptotic in H¹ to the linear solution associated to these asymptotic states. Now the result is the following, the same as for random data: the wave operator W± is the identity plus smoothing, and you actually have convergence of the same term in H², or rather H² minus something. Exactly the same statement, with a proof which is completely different. And my claim is that the proof of this theorem is, well, if you want to go up to H² you have to put in some technology, but I will show you in two pages, at the end of the talk, the proof for H^{3/2}.
Okay, so now let's go back to random data. Of course, in all this business you have to define the measures, and if I just state that I can define a measure for which there is almost sure scattering, I have said nothing, because I can take, of course, μ = δ₀, the Dirac mass at u₀ = 0. So I have to define the measures, tell you that they are non-trivial, and that they still enjoy some nice properties. Okay, so there are plenty of ways to define probability measures, and one way is to use the compactification of ℝ^d given by the harmonic oscillator. In dimension d, the harmonic oscillator has eigenfunctions which are the Hermite functions, with eigenvalues 2n + d, and if you look at the kernel of the harmonic oscillator minus (2n + d), it has large dimension, essentially asymptotic to c·n^{d-1}. The harmonic oscillator has compact resolvent, so you can decompose any L² function on an eigenbasis of L² made of eigenfunctions of the harmonic oscillator: I decompose the initial data in L² as a sum over n and k of coefficients times the eigenfunctions. Now the game is very simple: I am just going to randomize these coefficients, change them randomly, and the only constraints are, well, here I choose independence, so I multiply the coefficients by independent identically distributed random variables, basically Gaussians of mean 0 and variance 1, but any random variables with fast decaying law would work. And now the probability measure I consider is the law of the random variable that I get by this randomization. So basically, when I say something holds μ-almost surely, think: for almost every initial data of this form.
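As a quick numerical sanity check of this spectral picture (my own sketch, not part of the talk), one can verify in dimension 1 that the L²-normalized Hermite functions are eigenfunctions of H = −d²/dx² + x² with eigenvalues 2n + 1 (which is 2n + d for d = 1):

```python
import math

import numpy as np
from numpy.polynomial.hermite import hermval


def hermite_function(n, x):
    """L^2-normalized Hermite function h_n(x) = H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                       # select the physicists' Hermite polynomial H_n
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2) / norm


x = np.linspace(-10.0, 10.0, 20001)       # h_n is negligible beyond |x| = 10 for small n
dx = x[1] - x[0]
n = 5
h = hermite_function(n, x)

l2_norm = np.sum(h**2) * dx               # should be close to 1
hpp = np.gradient(np.gradient(h, dx), dx) # crude finite-difference second derivative
residual = np.max(np.abs(-hpp + x**2 * h - (2 * n + 1) * h))  # eigenvalue equation residual
print(l2_norm, residual)
```

The residual is only finite-difference noise, confirming −h″ + x²h = (2n + 1)h to discretization accuracy.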
At first glance it looks like I've done nothing: I started from a function in L², I just changed the coefficients randomly, and you would think the result should not be very different from the initial function. Actually, it is very different. Okay, so now I have to assume a non-pinching property: I have to ensure that my L² mass is distributed evenly over the E_n eigenspaces, so that there is mass on many of the Hermite functions h_{n,k}. This is the property saying that each coefficient is smaller than a constant times the average of the coefficients. It means that I can have some very small coefficients, I can put, say, half of the coefficients to 0, that's not a problem, but I cannot have coefficients which are large with respect to the others: the coefficients cannot be much larger than the average. Of course the value of the constant has no importance whatsoever. Okay, so under this assumption it's possible to prove the following, where H^s is the harmonic Sobolev space: for non-negative s, the set of functions u such that ⟨D⟩^s u is in L² and ⟨x⟩^s u is in L². If I take u₀, the function which I used to define the measure, in H^s, then almost surely my random function is in H^s. So basically everything happens as if the Gaussian random variables were bounded; the Gaussian random variables are essentially bounded. This is some kind of a good result: I'm not considering functions which are too bad. Next, there is a result which shows you that I'm not cheating. If I take u₀ in H^s, for possibly an s larger than the one which was here, then almost surely my random function is not more regular and not decaying faster. Meaning that, in terms of L² regularity and L² integrability, my randomization did not improve things: if I started with a function which was not very regular, then almost surely my new family of functions is not very regular, in terms of L² based norms. Now, in terms of L^p norms, it's much better.
You have an L^p bound, with s derivatives and decay in x, as soon as, okay, this is true if you start with u₀ in L²: you get some decay and some smoothness, but measured in L^p norms. So you see some kind of reverse Sobolev inequality. Usually, if you want to bound the L^∞ norm by an L² based norm, you're going to lose d/2 derivatives; here it is the other way around, you start with L² and then you have s derivatives in L^∞. It's the Sobolev embedding the other way around. Of course it's not true for p equal to infinity, but it is true for all the others. So in some sense my initial data do have some decay, just not measured in L² norms. A small comment: this is not unreasonable. Why so? You are in ℝ^d, so a typical function is going to spread over ℝ^d; it's exactly the opposite of concentration, it's rather something spread out, and if you measure that in high Sobolev norms it's small. That's the kind of phenomenon happening here. Okay, so now I'm going to state another almost sure scattering result, for which the proof is actually not very difficult; it's less technical than the 1D result that I stated at the beginning. It's a new result telling you the following. It's also cubic NLS. You take the dimension smaller than 4, you take initial data in L², and you randomize, so you have a measure which is supported in L², and if u₀ is not smoother than L² then your initial data are not smoother than L². The theorem tells you the following: almost surely you have scattering in L², and actually better, because the wave operator is the identity plus a term which is in H¹. So you are gaining one derivative and one power of decay in x, and, as usual, you have convergence in H¹. Okay, so now a few words about how we prove a result like that. The first step is to use a compactification of spacetime, in some sense, by using the so-called lens transform.
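Here is a toy numerical illustration (my own sketch, not from the talk) of why randomized coefficients behave as if they were bounded and do not concentrate: with all N Fourier coefficients equal to 1 the sum concentrates at a point and its sup equals N, while Gaussian coefficients of the same mean-square size give a sup of order √(N log N):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                     # number of Fourier modes
x = np.linspace(-np.pi, np.pi, 1025)         # grid containing x = 0 exactly
phases = np.exp(1j * np.outer(np.arange(N), x))

# Deterministic coefficients c_n = 1: full constructive interference at x = 0.
sup_det = np.abs(phases.sum(axis=0)).max()   # equals N

# Gaussian coefficients with the same expected L^2 size: typical sup ~ sqrt(N log N).
g = rng.standard_normal(N)
sup_ran = np.abs((g[:, None] * phases).sum(axis=0)).max()
print(sup_det, sup_ran)
```

The randomized sup is much smaller than the deterministic one, even though both families have the same L² size: this is the "reverse Sobolev" gain in the talk, in miniature.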
Then you prove that you have a local probabilistic Cauchy theory for the nonlinear Schrödinger equation with harmonic oscillator, which I'm going to show you. And then you have a globalization argument, which is actually very easy: it's just an energy estimate, and the only tricky thing is to find the good energy to choose; once you have it, it's essentially not difficult. Okay, so first the lens transform. The lens transform is an explicit transformation that I'm not going to write, but the point is that it conjugates the flows of the linear and the nonlinear Schrödinger equations with and without harmonic potential. First, this tells you that you have a conjugation of the linear solutions. Instead of saying it, I'm just going to write on the board a commutative diagram that summarizes this. Ten years ago I was able to write an exact sequence on the board, and today I can write a commutative diagram; that's the kind of fulfillment of a career. So you start from L² at time t = 0; you have L₀, and L₀ is the identity, just check. Then you have L². On one side you have the flow of the linear Schrödinger equation, e^{isΔ}; on the other you have the flow of the harmonic oscillator, e^{-itH}; and the vertical arrows are L_t, from L² to L². It's trivial to check that L₀ and L_t are isometries on L²; that's what this factor does for you. The times t and s are related by s(t) = tan(2t)/2, and Lemma 1 is: this diagram commutes. To solve the linear equation on one side, you just apply L₀, solve the linear equation on the other side, and go back.
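For the record, here is one common normalization of the lens transform (conventions differ between authors by factors of 2; the formula below is my reconstruction, chosen to be consistent with the time change s(t) = tan(2t)/2 used in the talk):

```latex
v(t,x)=\frac{1}{(\cos 2t)^{d/2}}\,
e^{-\frac{i}{2}|x|^{2}\tan 2t}\,
u\!\Bigl(\frac{\tan 2t}{2},\,\frac{x}{\cos 2t}\Bigr),
\qquad t\in\bigl(-\tfrac{\pi}{4},\tfrac{\pi}{4}\bigr).
```

With this normalization, if u solves the free linear equation i∂_s u + Δu = 0, then v solves i∂_t v = (−Δ + |x|²)v with v(0) = u(0), and the map is an L² isometry for each t. For the nonlinearity |u|^{p−1}u, the transformed equation picks up a time-dependent factor (cos 2t)^{(d(p−1)−4)/2}, so it is autonomous only in the L²-critical case p = 1 + 4/d; for the cubic equation in dimension 3 this is why the harmonic-oscillator side of the diagram is a non-autonomous equation.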
Lemma 2 is that another diagram commutes: on one side you have the flow of cubic NLS, S(s, s′), which is actually Ψ(s − s′) because the equation is autonomous, and on the other side you have the flow of another Schrödinger equation, S̃(t, t′), which is non-autonomous; the variable s is for the equation without harmonic oscillator, and t is for the one with harmonic oscillator. Lemma 2 is just: this diagram commutes. And now, once you have that, s ∈ (−∞, +∞) is of course equivalent to t ∈ (−π/4, π/4), so solving NLS globally is equivalent to solving the non-autonomous nonlinear harmonic Schrödinger equation on (−π/4, π/4), open on both sides, and scattering is equivalent to solving it on [−π/4, π/4], closed on both sides. And now just a comment: here you see it is important that we put this on the left, because I cannot apply e^{-isΔ} to both sides; e^{-isΔ} does not behave well on the harmonic H¹ space. So it's important: if I use the diagram, I start here, I apply my lens transform, which is L₀, I solve the nonlinear equation, then I apply the inverse of the harmonic propagator and I come back here, and I come back very easily because at time 0 the lens transform is the identity. That's why I need to apply this operator to go back to time 0, and once I am at time 0, I can transfer estimates from one side to the other. So maybe I'm going to skip the rest; well, it might be very fast. How do you prove the local Cauchy theory? You look at the solution as the linear solution plus a smoother perturbation; as I said, in the probabilistic setting, the fact that something is smoother is inherent to the method of proof. Usually the equation is not going to be well
posed at the level of regularity of my initial data, but at the H¹ level of regularity, in this case, it is well posed. Now, for v, my perturbation, I have a nonlinear harmonic Schrödinger equation with a term like that; of course there are a bunch of cross terms, but basically, if I keep only the two extreme terms, I have a source term and a nonlinearity. The probabilistic estimates tell me that the source term is bad in terms of L² norms, but the cube of the source term is good: this is a good term which essentially lives at the H¹ level. And the cubic term is not a problem, because I can solve the cubic NLS in H¹ in dimension 3 without any problem; it's not even critical. So that's the idea. Then, very rapidly, you have an energy estimate, which is this one; this is what happens, and then you just make a calculation, and I think I'm going to skip this part. Now let's go back to the deterministic smoothing property of the wave operator. I go back to my deterministic result, and I remind you that the result is the following. Now there is no more probability in the game: I just take the old H¹ scattering solutions, the Ginibre–Velo result, telling me that the solution is asymptotic to a linear solution in H¹, and the theorem is that we actually have a better convergence property. What I think is that basically we use just H¹ subcriticality, and we expect this phenomenon to be true in a very general context. Okay, so let me show you the proof in the simpler case where, instead of trying to reach H², you reach H^{3/2}; I will explain at the end how to reach H², which needs a little more technology. Actually, the technology for H^{3/2} was all there at the end of the 90s, so this remark could have been written at the end of the 90s. Okay, so we do exactly what we did (there should be a u here) for the probabilistic theory: the solution is going to be the linear solution plus a correction, and the
correction is going to be smoother; the only thing is that you want it to be smoother globally in time. The equation is just that v solves the linear equation with a source term, which is |u|²u. Now we are going to prove the simplest estimate, that |u|²u is bounded in L¹_t H^s; then, by the Duhamel formula, v, which is this integral, will converge: e^{-itΔ}v(t) converges in H^s to the integral from 0 to infinity. We proceed in two steps. First step: just use scattering to prove that you have a global L¹_t H¹ bound. Second step: use bilinear smoothing to prove that you have a global L¹_t H^s bound. The first step is here: prove that the Strichartz norms are bounded globally in time; either you see that it comes out of the proof of scattering, or you can prove directly that scattering implies that the Strichartz norms are globally bounded. So you have a bound here, and then you just have to estimate the L¹_t H¹ norm of |u|²u. You take one derivative and you bound u²∇u: the gradient ∇u you put in L^∞_t L²_x, so u in L^∞_t H¹, and the other terms go in L²_t L^∞_x, and of course Hölder tells you that the product is in L¹_t H¹. Now L²_t L^∞_x is bounded, by Sobolev embedding, by L²_t W^{1,6} in dimension 3; we only need W^{1/2+ε,6}, so you have some slack. Okay, and that's it for the L¹_t H¹ norm. Now a little more: this is the page for the L¹_t H^{3/2} norm. Let me recall the three-dimensional bilinear smoothing effect: if you take two solutions of the linear equation, then for any s smaller than one half, the product of the first solution times ⟨∇⟩^{1+s} of the second solution is bounded in L²_t L²_x by the product of the H¹ norms of the initial data. So you can go up to 3/2 − 0. And now you use the previous estimate to write u as a superposition of linear solutions: u is, by the Duhamel formula, the linear solution associated to u₀ minus the Duhamel term, and you
write it this way: an integral from zero to infinity, with an indicator of s smaller than t to account for the integration bound, of e^{itΔ} g(s) dμ(s), where g(s) is this function and the measure μ puts the two terms in the same integral: you have the Lebesgue part, plus a Dirac mass at s = 0 which accounts for the u₀ term. So u is the free linear evolution applied to an integral which converges in L¹H¹, and that's how it's finished. You write u·⟨∇⟩^{1+s}u in terms of a double integral, with parameters s₁ and s₂, since both terms are superpositions of linear evolutions; you apply Minkowski, the norm of an integral is smaller than the integral of the norms, and now you have products of two free solutions, as before, with parameters s₁ and s₂. You apply the bilinear estimate, and this is smaller than the double integral of the product of H¹ norms, which is finite by the previous step. Okay, so now you have a good bound on this, and you just plug it in: you redo what you have done before, except that now you gain one half of a derivative. The L¹_t H^s norm of |u|²u is bounded in essentially the same way: you have u²⟨∇⟩^s u; of course ⟨∇⟩^s of |u|²u is not exactly this, there is a little technique here, but it's standard. You just do Hölder: L²_t L^∞_x for one factor, and here you have the L²_t L²_x bound that you control by the previous step; L²_t L^∞_x is still bounded, and you still have a slack of one half of a derivative. And that's it. I'm just going to finish: if you want to improve and go up to H², you have to use a duality argument; you have to use this bilinear smoothing against one half of a derivative, so to gain one full derivative you have to do a TT* argument. And to do that, the good functional setting is what people know now as the setting for solving critical NLS, the U² and V² spaces, which allow for nice transfer arguments. Here I just proved a transfer argument by hand: I have an estimate for linear solutions and I transfer it to my
nonlinear solution, and U², V² is going to do that for you too. The advantage is that you have a nice duality setting which allows you to perform this TT* argument. Then it's more technical, and this part is more recent technology, which probably could not have been written in the 90s. Okay, so, just a conclusion. The proof I just showed you actually uses only previously known results. To my knowledge, it's the first deterministic result which was motivated by ideas coming from probability, and that's important for me, because I think people would have thought of a result like that before if they had followed the probabilistic strategy: it's very natural to do this once you've done the probability, but of course it uses none of the probabilistic techniques. I think this should be much more general than the small particular case that I showed you. As you saw, the proof uses nothing about the defocusing property; it just uses the fact that you know that your solution scatters. That's easiest in the defocusing case, but it's also known to be true in some focusing cases, so you can use scattering as a black box: once you know you have scattering, you can implement a strategy like that. And I guess it should be true for many other equations too; I think it should also be true for wave equations, as soon as you know you have scattering. Okay, I'm going to stop here. Thank you very much for your attention, and happy birthday. Are there any questions or comments?
You need to speak louder, I can't hear very well. So, the question is whether you can get information about the wave at low frequencies and at high frequencies: that is something in the program for subsequent works. About the compactness: it is a nonlinear operator, and of course it tells you something, but what we would like to be able to prove, in some sense, is to find regimes in which the perturbation is small in some sense. For now, what we know is compactness with a slack of one derivative, which is something.

For the measure: is it fundamental that the randomization is built on the harmonic oscillator, or can you do it with something else? You can do many kinds of randomization; one could write some others on the board. For example, these people here use a different randomization of the measure, so you can do something like that; that is another very simple one. What is important is a randomization for which you are able to describe the linear evolution very well. It may not sound so easy for the harmonic oscillator, because what you get there is the linear evolution of the harmonic oscillator rather than of the Schrodinger equation without harmonic potential; but the lens transform tells you that you have a good description of this linear evolution too, right? And so you can...
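For reference, here is one common normalization of the lens transform mentioned above. This is my reconstruction, and the exact factors depend on the convention; here the free equation is i partial_t u + (1/2) Delta u = 0 and the harmonic oscillator is (1/2)(-Delta + |x|^2).

```latex
% Lens transform: if u solves the free equation
%   i\partial_t u + \tfrac12 \Delta u = 0 \quad \text{on } \mathbb{R}^d,
% then, for |t| < \pi/2,
v(t,x) \;=\; \frac{1}{(\cos t)^{d/2}}\,
  u\!\Big(\tan t,\; \frac{x}{\cos t}\Big)\,
  e^{-i\frac{|x|^2}{2}\tan t}
% solves the harmonic-oscillator equation
%   i\partial_t v = \tfrac12\big(-\Delta + |x|^2\big) v,
% with the same data at t = 0.
```

So, up to an explicit change of variables and a quadratic phase, the harmonic-oscillator flow and the free flow carry the same information on this time interval.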
Another popular measure is built just with Fourier: you take u_0 and decompose it over unit cubes in frequency, u_0 = sum_n chi(D - n) u_0; you can decompose a function in Fourier into cubes. And now you randomize: you put IID random variables g_n on the pieces and take sum_n g_n(omega) chi(D - n) u_0, something like that. In that case the theorem will tell you that the L^p norms are better, because at the level of each cube, if you take small cubes, the L^p norms are better, and the randomization is telling you that you have a decoupling: the L^q norm of the sum behaves like the square root of the sum of the squares of the L^q norms of the pieces. Of course this is false deterministically, but in some sense, after randomization, it is true, so you have gained a lot.

Does this affect the procedure with the lens transform later? If you work with a measure like that, you cannot use the lens transform: this randomization is badly behaved with respect to the lens transform. But it is well behaved with respect to the usual flow, because essentially if you apply e^{it Delta}, it commutes, right? Applying e^{it Delta} to the randomized data gives the randomization of the evolved data, so you have a good description of the linear flow in that case too, without the lens transform. The constraint is always the same: you can randomize, but you need a good description of your randomized data under the linear flow.

There was a question: can you get this gain deterministically? I don't think so, but I don't know; I have no result in either direction, but I don't think so.

What about taking the example with Carlos: imagine that you have a kind of soliton resolution, so you have radiation there; do you expect a decomposition like that?
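The decoupling just described for the cube randomization can be written schematically as follows. This is my reconstruction: f_n = chi(D - n) u_0 are the cube pieces and the g_n are IID centered Gaussian (or subgaussian) variables.

```latex
% Khinchin-type decoupling: for q \ge 2,
\Big( \mathbb{E}_\omega \Big\| \sum_n g_n(\omega)\, f_n \Big\|_{L^q_x}^q \Big)^{1/q}
  \;\lesssim\; \sqrt{q}\, \Big( \sum_n \| f_n \|_{L^q_x}^2 \Big)^{1/2}.
% On average the L^q norm of the sum is an \ell^2 sum of the L^q norms
% of the pieces: false for a fixed deterministic choice of coefficients,
% but true after randomization.
%
% Compatibility with the free flow: Fourier multipliers commute, so
e^{it\Delta} \Big( \sum_n g_n(\omega)\, \chi(D-n)\, u_0 \Big)
  \;=\; \sum_n g_n(\omega)\, \chi(D-n)\, e^{it\Delta} u_0 .
```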
This is a particularly simple model, you see. Yes, you just have to understand the self-interaction, right? If you have a soliton resolution, then you have a lot of terms: you have the solitons, and the interaction with the radiation. The radiation term should be no problem, but you have to understand the interaction of the solitons. Make the calculation and see; I don't know, but it should not be very difficult to get convinced whether you have something or not. Any more questions? So I guess we start again in 15 minutes.