[Pre-talk conversation] So Monday there will be another talk, at Paris 7. And how will you get to Paris 7, at what time? You gave me the room; it's at 4:30. Yes, at 4:30. It's easy, you just go to the fifth floor, no? No, not there. I'll come from home, five minutes at most. This is just a short introduction. I would like to thank the organizers for the invitation to this very nice meeting. I have learned a lot from the previous talks, and I was very happy to meet people I had never met before; I only knew them from having read some of their papers. For instance, I had read some of your lectures, and I was happy to see you in real life. I am very happy to be here and to see new people; it is refreshing for me. For family reasons I do not attend many workshops, so I am really glad to be somewhere a bit different. My talk is not exactly in the main theme of the conference, but I hope you will find some connections with what you know. I will speak about solving NLS-type equations with white noise initial data, which means very singular initial data. This is joint work with a colleague who is at the University of Edinburgh, and Yuzhou Wang, who is at the University of Birmingham. So, as you see, it is a sort of modern manifestation of the entente cordiale, a Bulgarian and Chinese collaboration. First, I would like to show you what I mean by white noise on the circle. It is something that can be defined in two ways.
I have already mentioned that it can be seen as a Gaussian process with identity covariance; there is also the definition through the correlation function, which I will not use in this talk. I will use more concrete definitions; in fact the first one will be the important one for my talk. So let us fix (g_n) to be a family of independent, standard, complex Gaussian random variables. Maybe I should say what I mean by a standard complex Gaussian: it is simply something like (h_n + i l_n)/√2, with h_n and l_n independent real standard Gaussians. Yes, yes — there is an i. Then, given such an independent family — of course it is already not obvious that such a family exists, but once we know it does — I can define the white noise on the circle as the law of the random variable defined by the following random series. So, in my talk, u^ω(x) will be this random series: the exponential basis e^{inx}, multiplied by the independent Gaussians. So it is an infinite sum over the independent Gaussians. It is very nice to write this series down, but of course it is very singular, because the Fourier coefficients do not decay. In fact they grow, as they do, as you know, for an i.i.d. sequence. That is really a big problem, because this kind of object is not really a function; it is a distribution. So the way we should see things is that we have a map which, to ω, assigns this distribution. And in fact I can be even more precise: this distribution lives in the Sobolev space H^s for every s smaller than −1/2.
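The definition just given can be recorded as follows; this is a reconstruction of the blackboard formulas (the 1/√2 normalization is my assumption):

```latex
% Standard complex Gaussians (h_n, \ell_n real standard Gaussians, all independent):
g_n \;=\; \frac{h_n + i\,\ell_n}{\sqrt 2},
\qquad \mathbb E\, g_n = 0, \quad \mathbb E\,|g_n|^2 = 1,
% and white noise on the circle as the law of the random Fourier series
u^\omega(x) \;=\; \sum_{n\in\mathbb Z} g_n(\omega)\, e^{inx}
\;\in\; H^s(\mathbb T)\ \ \text{a.s., for every } s < -\tfrac12 .
```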
And the scale of Sobolev spaces covers everything, because the union of all the Sobolev spaces is the set of all 2π-periodic distributions. OK, so this object lives in the Sobolev space H^s for every s < −1/2. And in fact it cannot be seen as a random variable with values in H^{−1/2}, because its H^{−1/2} norm is almost surely infinite. So what we have is that the map induces a random variable, and the law of this random variable is our white noise. And our goal will be to solve PDEs with this initial data. If it were a linear PDE, we have known how to do this for many years, but for a nonlinear PDE it is a problem. So this is one way to see the white noise, from the Fourier side. But one can also see it in physical space, and here is a way. Using the central limit theorem, one can see this measure on H^s as the weak limit, as N goes to infinity, of the law of these random variables. These are H^s-valued random variables: as you know, delta functions are exactly in H^s for s < −1/2. So I take independent copies of this random variable — I will explain, it is written after. The x_n are distributed according to the uniform measure on the torus. OK? So you take these sums of delta functions centered at uniform points on the torus, multiplied by coefficients which — I said I take Gaussians, but what I really need is that they have mean zero and variance one; so, for example, ±1 is fine. And if you look at this random variable — I will not do the computation, it is a one-line computation — if you compute the covariance matrix of this variable, it is the identity.
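A quick numerical illustration (not from the talk) of the regularity threshold just stated: since E|g_n|² = 1, the expected squared H^s norm of the truncated series is the sum of (1 + n²)^s over |n| ≤ N, which stays bounded as N grows exactly when s < −1/2. A minimal sketch in plain Python:

```python
# Expected squared H^s norm of the truncated white noise series
#   E || sum_{|n|<=N} g_n e^{inx} ||_{H^s}^2  =  sum_{|n|<=N} (1 + n^2)^s.
# This partial sum converges as N -> infinity iff s < -1/2.

def expected_hs_norm_sq(s: float, N: int) -> float:
    """Partial sum  sum_{|n| <= N} (1 + n^2)^s."""
    return sum((1.0 + n * n) ** s for n in range(-N, N + 1))

for s in (-0.6, -0.5, -0.4):
    a = expected_hs_norm_sq(s, 10_000)
    b = expected_hs_norm_sq(s, 100_000)
    # below the threshold (s = -0.6) the sum has nearly saturated;
    # above it (s = -0.4) it keeps growing like N^(1+2s)
    print(f"s = {s:+.1f}:  N=10^4 -> {a:12.4f}   N=10^5 -> {b:12.4f}")
```

Running it shows the partial sums essentially stabilizing for s = −0.6 while still growing for s = −0.4, which is the "just below −1/2" regularity of white noise.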
I mean, if you test this against a test function and take expectations, it is trivial — a one-line computation: the law of these sums converges to the white noise on the circle. I will not use this; I just want to say that one can also see it in physical space, not only on the Fourier side, and I like to have both pictures. In my talk I will mainly use the first one, but it is good to keep this one in mind. In fact there is Flandoli — he proves less than what I will tell you, but his way of seeing the white noise was this one, and I was inspired by his paper when I wrote this. It makes me think that for the Schrödinger equation, in this context, it may be a promising direction to use this kind of approximation and to see what survives of the equation; I mention it as a possible future direction. So, just to say that one can also see the white noise as a limit in physical space, and I think this is not yet explored. So, for the moment, there is no PDE in my talk; there are only measures on H^s, or, if you prefer, Gaussian measures on a very negative Sobolev space. How are these measures connected to PDE? The connection comes from how this white noise is connected to the PDE. The point is that if we write the L² norm in terms of the Fourier coefficients, we can formally define the preceding measure as follows. Why? Because if you have a function on the torus, we can develop it in Fourier series, and if we identify the function with the sequence of its Fourier coefficients, we know by Parseval that the L² norm is essentially the sum of the |c_n|².
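The physical-space limit sketched here can be written schematically; the centering and normalization below are my reconstruction of what the covariance computation refers to (2π factors guessed):

```latex
% Physical-space approximation of white noise (reconstruction).
% x_1, x_2, ... i.i.d. uniform on the torus;  \varepsilon_k i.i.d., mean 0,
% variance 1 (Gaussians, or simply \pm 1 with probability 1/2):
S_N \;=\; \frac{1}{\sqrt N} \sum_{k=1}^{N} \varepsilon_k\, \delta_{x_k}
\;\in\; H^s(\mathbb T), \qquad s < -\tfrac12 .
% Testing against a smooth f, the central limit theorem gives
\langle S_N, f\rangle \;\xrightarrow[N\to\infty]{\text{law}}\;
\mathcal N\big(0,\ c\,\|f\|_{L^2}^2\big),
% i.e. the covariance is (a multiple of) the identity, and
% law(S_N) converges weakly to the white noise measure on H^s.
```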
So if I look at something like this formal expression, it should be understood as follows: if I write the L² norm as the sum, the exponential of minus the sum becomes a product over n, so I can see it as a product measure, one measure on each Fourier coefficient; and here, of course, this L² norm gives a Gaussian measure on each coefficient. Of course, this does not quite work: I should multiply here by 2π or something like that on each factor to make each one a probability measure. OK, I see it like this. And then I can continue, develop the exponential of the sum as a product of exponentials, and start to see it that way. So the problem is that this is not a probability measure, because the normalizing factor is missing, and that is why I put this Z to the power minus one. Z is in fact infinite: each factor should carry its own normalizing constant, and the product of these constants diverges; compensating for that is what putting Z to the power minus one means. So this is not a probability measure, but the model with the normalizing factors is a probability measure, and what we do know how to define is an infinite product of probability measures. And so we see that, in fact, the white noise can be naturally seen as this Gaussian measure, in fact, in a very negative Sobolev space. And in fact it is true that the measure written formally in this way and the measure given as the random series of the beginning are the same. But if I write it like this, I make a connection to PDE: because if I have a PDE which conserves the L² norm, and which is Hamiltonian, meaning that the volume element is conserved, then I can hope to have this measure as an invariant measure. But of course, in order to do so, I should solve my PDE with this crazy, very singular initial data. And if I have a way to solve it, then I will solve my problem. And in fact there are many important Hamiltonian PDEs which conserve the L² norm.
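The formal factorization just described can be recorded as follows; this is a hedged reconstruction, and the normalizing constants are precisely what the talk says diverge:

```latex
% Formal Gibbs-type expression for white noise (reconstruction; constants guessed).
d\mu(u) \;=\; Z^{-1}\, e^{-\|u\|_{L^2}^2}\, du
\;=\; Z^{-1} \prod_{n\in\mathbb Z} e^{-|c_n|^2}\, dc_n,
\qquad u(x) = \sum_{n\in\mathbb Z} c_n e^{inx},\quad
\|u\|_{L^2}^2 \;\propto\; \sum_n |c_n|^2 ,
% (2\pi factors from Parseval absorbed into the normalization).
% Each factor, properly normalized, is the law of a standard complex Gaussian;
% the product of the normalizing constants diverges, which is what Z^{-1} hides,
% but the normalized infinite product of probability measures is well defined.
```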
For instance, the KdV equation conserves the L² norm. You also have, say, the nonlinear Schrödinger equation, which will be my subject today. So here it is a real-valued function, and here is the nonlinear Schrödinger equation. So here are two PDEs preserving the L² norm, and for these PDEs we may hope to solve them with this initial data and to have the white noise as an invariant measure. Of course, all of this is to be made precise, and I hope on the next slide I will explain what this means, because when we solve nonlinear PDEs with such data we should really be careful about what we are talking about. I should also mention that the whole difficulty is in the local-in-time theory: if you have a good local-in-time theory, there is a general globalization argument, introduced 25 years ago by Bourgain, which says that once you solve the problem locally with such data, there is a way to exploit the invariance of the measure as a conservation law to pass from local to global solutions. Therefore the issue is to solve the problem locally in time with such data, for instance for the nonlinear Schrödinger equation or KdV. So let us try to formulate the result; my goal in the next five or ten minutes is to formulate the result, and then I will see what I can explain about the proof. Just to be quick: the problem I discussed has already been solved for KdV, for maybe fifteen years, but it is still open for the nonlinear Schrödinger equation. For KdV, as I said, it is solved; but for NLS, focusing and defocusing, for both, I am not aware of the result you mention — OK, already for the defocusing case I am not aware of it. OK, so let us put it with a minus for the focusing case — but it is not written. OK, and this is a good point: even if they solved the problem in negative spaces by integrability, it is not yet done.
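The two examples can be written down explicitly; signs and coefficients below are my guess (conventions vary), but both flows conserve the L² norm:

```latex
% Two L^2-conserving Hamiltonian PDEs on the torus (conventions guessed).
\text{(KdV)}\qquad \partial_t u + \partial_x^3 u + u\,\partial_x u = 0,
\qquad u\ \text{real-valued},
\text{(NLS)}\qquad i\,\partial_t u + \partial_x^2 u = \pm\, |u|^2 u,
\qquad u\ \text{complex-valued},
% with one sign the focusing and the other the defocusing case; in both,
\frac{d}{dt}\, \|u(t)\|_{L^2(\mathbb T)}^2 \;=\; 0 .
```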
I will explain why. But, as you say, we are very near; indeed I should be careful. There is a book by Kappeler where they consider this — but do they solve it at H^{−1}? I think they solve it in L², whereas this data, as I explained on my first slide, belongs to H^{−1/2−ε} for every ε almost surely. So the initial data has regularity just below H^{−1/2}; let us say it like this. Maybe integrability will do the business, but it is not done yet, and even if they do it, there is work to be done; it is not absolutely immediate. We approach the problem without integrability, and — it is a nice discussion, maybe for later — even if you solve it by integrability methods, the approximation you use will be different. I will try to emphasize this point: even if one day the problem is solved by integrability, or by our method, the two solutions will not a priori be the same. I will try to explain later why: here we are in a situation where the solution depends on the approximation taken of the initial data, and that is the definition of a supercritical problem, as we saw with Nicolas Burq for the wave equation. We are in a situation where the solution you obtain depends on the way you approximate the initial data: if you approximate one way, you get something; if you approximate another way — which was the counterexample of Lebeau — you go to infinity. So I believe we are in a world where the object itself depends on the approximation, just as for singular stochastic PDEs, where one always regularizes the noise by convolution; if you take another approximation, I think you may have no limit at all in some directions, but I am pretty sure it depends on the type of approximation. So here we have a similar phenomenon for NLS, but not for KdV. OK, it is an interesting point we can discuss, but since by the methods I know we cannot solve the problem for the true NLS, we will take more dispersion, and for
such a problem, as you will see, we can solve the problem; that is the main result. But let us see what exactly we can solve, and let us be very careful, because the formulation of the result is important, and these issues of approximation are also important to understand. So the first good thing here is that I will take my model to be what I call 4NLS, the fourth-order NLS: it means I take more dispersion. Then we can solve this problem for regular initial data, for L² for instance: we can solve it for data in H^s with s ≥ 0, and for smooth data we can define nice global solutions. My philosophy is that, once I am in such a situation, the solutions I obtain for singular data should be connected to the regular flow; somehow they should be limits of the regular flow, otherwise it is not so clear — I mean, I do not find it so natural. And that is why the result by Flandoli is remarkable: he obtains a solution of the Euler equation as a unique limit of smooth solutions of the Euler equation, which are known since Yudovich. Previous works, like the ones by Albeverio and Cruzeiro, do not connect their solutions to the smooth flow of the Euler equation; that was the main novelty of the recent paper by Flandoli. So I think this should be the philosophy: if you are in a situation where the problem has global smooth solutions, the limit object should be related to the smooth flow; otherwise, I at least do not find it natural. And so I will do exactly this. I will take my white noise initial data and I will convolve it — so, as you see, I take a particular approximation, an approximation by convolution. So I take the following u_0^N; I call it u_0^{N,ω} — I am sorry, the ω now goes upstairs, here it is downstairs — but it is simply the truncation of the Fourier series, the Dirichlet truncation. This is a perfectly well-defined smooth function. Actually, I can take an even more general truncation: just a convolution with some kernel such that the Fourier
transform is of the form θ(n/N), with θ some localized bounded function on ℝ. If I take θ to be the characteristic function of [−1, 1], I obtain this Dirichlet truncation. But, as you see, I do not take the more natural convolution, which would be with a bounded function of compact support in x; I take one with compact support on the Fourier side. Of course, I strongly believe that our proof generalizes to the other convolution, but our paper is already too technical; it is a question of checking that everything in our proof goes through from one convolution to the other. OK, so you see what I take: my convolution kernel ρ_N is not of compact support in x; it has compact support on the Fourier side. I think it is a natural truncation. So this will be my truncation of the initial data, this approximation of the initial data. And for this approximation of the initial data I can define a global solution u^{N,ω} of my equation, which is in fact a global smooth solution: with this data — not the white noise, but the truncated white noise — I have a global smooth solution, by the fact that I have a global flow for this equation. And then the whole issue of solving the problem with this initial data is whether this sequence has a limit. If it has a limit, for me, I have solved the problem in some sense: if it has a limit up to a subsequence, then I obtain a weak solution; if the whole sequence converges, I obtain a strong solution. That is my philosophy. And, unfortunately — it is not proven, but essentially a result of Guo shows it — this sequence has no chance of converging in any space, and I do not have time to explain why. However, I multiply this sequence by some random oscillation of modulus one: so c_N is a real random variable, and it goes to plus infinity as N goes to infinity — actually it is positive, so it is a divergent, highly oscillating random sequence. If I multiply by such a highly oscillating random factor, then the sequence of smooth
solutions has a limit, and that is the way the solution is defined. So I take the natural approximation by smooth functions; this sequence of smooth solutions of the equation with truncated data does not have a limit — even if this is not formally proven, I strongly believe it — however, if you multiply it by this strongly oscillating factor, it has a limit. After this renormalization there is a limit, and that is what I state on the next slide. So this is a soft formulation of our result, and it says the following. I write it slightly differently, but it is the same: as you see, the oscillating factor looks more complicated on the slide than what I wrote, but the point is that it is independent of x. It looks very big on the slide, but it depends only on t; it is a complex number of modulus one. So the main part of the sequence is this small guy, because it is the genuinely singular function of x. The claim is that this sequence converges almost surely in the space of continuous functions in time with values in H^s, for s < −1/2. I wrote it slightly differently; I just used that, because the L² norm is conserved, this quantity is the same at time t and at time 0. And in fact I even have an explicit expression for my c_N: c_N(ω) is something like this — the oscillating factor is the exponential of t times the sum of the squares of the moduli of the first N Gaussians. This is the renormalization factor; it is very explicit, in fact, even if it may not look it. I write it like this because that is the way you prove it. OK, so this was the solving part: as I say, we solved the equation in this sense; that is my way of saying that I solved the equation. And then, as a byproduct, you also have the invariance of the white noise. What does it mean? It means that if I look at a later time t, I can still develop my solution in that way: the initial
data is a sum over Gaussians, and at later times I have some other coefficients, g_n^t — this t is not a power; a priori they are not Gaussians, just some functions of t and of g_1, g_2, and so on up to infinity. I am saying nothing here; I just write that the solution at time t depends on the infinitely many Gaussians I had at time zero. So it is a kind of absolutely crazy combination of the Gaussians at time t — it is a nonlinear PDE. But since we are dealing with the solution of my particular problem, these random variables follow the law of standard complex Gaussians and, more importantly, they are all mutually independent. So it is remarkable: you take your solution, it evolves under your equation, and at time t there is this development; yet the coefficients are still Gaussians, and they still remain independent. That is a way to formulate the invariance of the white noise. This is, of course, a remarkable statement; for me it is not only a statement about the singular solution, it is also a statement about a nice property of smooth solutions. In fact, you have smooth solutions which somehow behave in the limit like white noise: for very large N, I have a smooth solution which is close to something like this, which I think is interesting in itself. And let me now compare, because I mentioned the paper by Flandoli. So Franco Flandoli proves something similar in the context of the Euler equation — of course it is the 2D Euler equation; everything there is in 2D. What are the differences? One difference is that Flandoli does not need to make this renormalization; for the Euler equation it is not needed. But the most important difference — and that is why it is a weak solution — is that the limit is only modulo some subsequence, and also the approximation is not by convolution. So the result of Flandoli is as follows: there exists an approximation of the white noise such that the corresponding smooth solutions have a subsequence converging —
here, not the whole sequence. And how does he prove it? By using this delta-function approximation and the work by Marchioro and Pulvirenti — it is a very nice paper, and I take the opportunity to advertise it. Of course he proves less, but still I find it very interesting for the Euler equation. And now I emphasize that we only obtain this limit for some particular approximation of the initial data, not for any approximation. Note that we have room: actually, we can prove our result almost down to Schrödinger. If you take here (−∂_x²)^α, everything still works as long as α is bigger than 1 — so our proof works down to Schrödinger, but it completely breaks down exactly at Schrödinger. We wrote it for the fourth-order equation because it is the more fancy equation, but it would work down to fractional Schrödinger with dispersion slightly bigger than 2; we did not write the details, but it is true. There is, however, something really important to do to get to α equal to 1; it is not merely technical. So we should admit that, at the present moment, we do not know how to deal with the usual NLS, not with this approach either — we are epsilon-close by this method, and maybe the integrability people will also get closer. In my opinion, in the coming years the problem will be solved for the true Schrödinger equation, but for the moment this is what we know how to solve, and I will try to convince you that our analysis offers new features compared to previous works on similar problems; that will be the goal of the remainder of my talk, at least. First of all, I would like to mention that for KdV this problem is solved, but in that case the problem is much easier, and the reason is that for KdV the local well-posedness is purely deterministic — as Sergei mentioned, by Kappeler and Topalov; we have several different proofs, and I mention all the contributors here. So for KdV the problem is easier, in fact deterministic, and the fact that the Cauchy problem is solved in a deterministic way has an impact: for KdV one can
show convergence of the sequence for any regularization of the initial data. So for KdV you do not need this; and, what is more important, for KdV I do not need to take the regularization by convolution or truncation of the Fourier transform: the deterministic results of the people mentioned there say that any approximation of the white noise by smooth functions gives a sequence of smooth solutions which converges in the appropriate topology. So, as you see, for KdV the problem is somehow deterministic, because any approximation gives the same limit. The problem really relies on probabilistic methods when the limit I obtain depends on the approximation, and I strongly believe — it was the case in our papers with Nicolas Burq for the wave equation — that it can be the case for NLS with white noise data. In any case, for the moment, as you see, I do not claim that any approximation gives a limit; I just claim that I have one approximation giving a limit, and nobody excludes that another one gives a different limit. So the parallel with stochastic integrals is very natural here: as you know, the analog of the smooth flow would be the Riemann–Stieltjes integral, and then, depending on the approximation, you get different limits. So something like this is not excluded for this kind of PDE. OK, so where, technically, is the problem more difficult? The important property of the KdV equation, as many people here know, is the absence of resonances when we restrict to mean-zero solutions. The fact that you do not have these resonances gives a very nice regularization property: even if your data is very singular, the remainder part of the solution becomes much more regular, because of the absence of resonances — a very strong regularization. In our case, even if we can remove a large part of the resonances, as was done yesterday by André, there are still some resonances remaining — what Sergei would call the integrable part of the equation — and actually this integrable part of the
equation makes our life difficult; that is the difficulty here. OK — no, it is not the zero mode; the problem is the resonances. The zero mode is one thing; the other thing is that there are many resonances for NLS which are not there for KdV. For KdV — it is a bit special for KdV — the resonance function is 3 n n₁ (n − n₁), and as long as all the modes are different from zero, this is a very huge quantity. For NLS the resonance function is something like 2(n₂ − n₁)(n₂ − n₃): there are four frequencies, and it can vanish. Even with the renormalization, as André did, you can remove some of the resonances, but there is one, the integrable one, which remains, and for us it is a problem; we must deal with it. It contains the main part of the solution, and for our perturbative methods at low regularity this is an issue. OK, we will discuss this later, but it is not only the zero mode; the point is that some resonances remain. If you prefer: for KdV the only resonance is at zero, and you can kill it by reducing to mean-value-zero solutions; for NLS you can remove many resonances by the trick André explained yesterday, but still one remains. I am sorry, it is becoming a very specialized discussion; I understand that people who are not in the field do not follow what I am talking about — I am sorry. OK, we will now see this removal of resonances. So — actually, I gave this presentation of the main result because it was somehow the more natural way to state it; the only thing we needed to know is that we can solve the problem with regular data. But let us now try to give a more precise formulation of the result. The interest of this formulation is not only that it is more precise; it is also that with it we will see a more precise structure of the solution, and this is really the novelty of what we do. We will now see in which sense the limit we obtain satisfies a limit equation, and we will give a description of the solution. For that purpose we do exactly what André did yesterday, so I will be quick. So the way you
can see the change of the equation he did yesterday is the following. If you take a solution of your equation and make the phase change we already saw — you multiply u by this modulus-one factor, which, as you know, does not depend on x — then the transformed function solves an equation which is very close to the first one, but better. It is better because it removes this L² norm, and it makes a resonant term appear, as I will explain later. So in fact, in the whole work, we do not solve the original equation written here; we really solve this renormalized equation, as André and Sergei did yesterday. The point is that we will solve the equation with data in negative Sobolev spaces, which means the L² norm is infinite — so you see the role of this renormalization. Essentially, there is a huge simplification here: my solution will have infinite L² norm, but the difference of these two terms makes sense. Written like this, the equation does not make sense for data which is not in L², but the difference makes sense, especially if you write it on the Fourier side. So what I will solve is this equation, with the nonlinearity interpreted on the Fourier side. But, as you see — I try to convey this message — for me it is not so important what exactly the limit equation is; what is important is to take the limit of the approximate solutions. If I take the limit, then in which sense the limit solves the equation is less important for me; the important thing is really the limit. From the probabilistic viewpoint this is exactly what you do all the time, but for PDE people it is less natural, because we have distribution theory and the feeling that one should solve in the sense of distributions. I have started to believe that the important point is really the limit, and this I learned from the probabilists. So here is a more precise version of the main result. What we say is that if
you take the solution of the problem with the renormalized nonlinearity, with initial data given by the white noise, then you have a solution which is given by an explicit singular term plus something more regular. And, in fact, what happens is this: if I did not have the oscillating factor here, I would simply have the free evolution starting from the white noise. This is the main novelty: I put into each term of this decomposition some strongly oscillating factor. So this is the structure of the solution: instead of the free evolution plus something more regular, the solution is something like this series. As you see, this is the free-evolution oscillation, and here — I should write this in another color — is the nonlinear effect; this is the term which makes the result new and interesting. And then I have the Fourier coefficient of the initial data, plus something more regular. So this is the structure of the solution, and this factor written in blue is the additional oscillation we take into account, and which makes our life difficult: how to justify such a decomposition of the solution, where a truly nonlinear effect is taken into account in the main part of the solution. And, in fact, it turns out that the limit we obtain in the soft formulation of the result, after the multiplication by c_N, is exactly the same solution I write here. There is no miracle: I said that there is a limit, and actually the limit solves this renormalized equation; but I prefer to formulate it like this, otherwise we would have to motivate this equation first. And the first component of the solution contains arbitrarily high powers of the random initial data — what I mean by this will be clear, I hope, on the next slide. So this is the structure of the solution, and I will now try to compare — how much time do I have, 50 minutes, or a bit less? OK, so I
will spend time here even at the cost of sacrificing all technicalities, because I think this is the important slide. In fact, I do something a bit naive here, but it is important: this exponential I can expand as a sum of powers — you know, the development of the exponential as the sum of t to the power n over n factorial. If I expand this exponential, I can see this object as a sum of multilinear expressions in the initial Gaussians, ordered by their degree, and the expression of order 2k+1 is this one: as you see, it is of the form |g_n|^{2k} g_n. So this is a (2k+1)-multilinear expression in the initial data, and I claim that it comes from the (k+1)-st Picard iteration. If I manage to explain what I mean by this, I will be satisfied with my talk, because that is the main message. Essentially, I need to solve my equation — no matter, in fact, which equation — and the initial data is some initial data which is my white noise; formally, at least, I can solve any equation if I am doing algebra. What is the Picard iteration? I just recall it: the Picard iteration is to consider the sequence which starts at 0, and such that at each next step of the iteration I solve the linear equation, with the nonlinearity evaluated at the previous step, and with the prescribed initial data. So this is the Picard iteration, and one good thing about it is that, at least formally, you can write the series, if you are interested, and say that your solution has the following structure: the solution of the equation, at least formally, is written as a sum of Q₁(u₀), a linear part in u₀, which is the free evolution; then, because it is a cubic equation, the next one, Q₃(u₀, u₀, u₀), is the trilinear part of the solution, which is simply a trilinear expression in the data. You see: the first Picard iterate, with u₀ given, is the linear evolution, which is linear with respect to u₀; for the next iterate I take the first one to the cube, so it
becomes a trilinear expression of the initial data. So the formal structure of the solution is like this: because the equation is cubic, the next term will be a five-linear expression, then a seven-linear expression, and so on; when I say "seven" I mean a seven-linear expression, and then infinitely many multilinear expressions of u_0. So this is the algebraic structure of any solution of any PDE. Of course, once I write this development, I have done nothing from the analytic viewpoint, because I should say where this expression makes sense. So what happens is that I can write this expression only formally, and, as you see, there are linear, trilinear, five-linear, seven-linear expressions and so on. What I say is that this expression here, the (2k+1)-linear expression with respect to the initial data which takes the g_n in this particular way, is a part of the (2k+1)-linear expression defining the iteration, and it comes at the (k+1)-st step of the iteration. So I don't take the whole iteration, I take just the singular part of the iteration, and then I just say that my solution is this singular part, which I was intelligent enough to identify, and all the remainder part of the solution I treat in a deterministic way. This is the whole philosophy, from my viewpoint, of rough paths, or of what people do in stochastic PDE, because what I say has exactly the same formulation in the works on singular stochastic PDE; the only thing which changes is that in their iteration the data would be, say, zero, and there would be a noise here, but it would be the same structure. So what I would like to say today, the main message of my talk, is how we proceed: we take this expansion and we try to be intelligent enough to identify the singular part of the solution, and once we have found it, we say this is the singular part of the solution, and all the remainder part we treat in a deterministic way. So the ideal situation is when
there is no singular part of the solution, which means that everything is done in a deterministic way. In many works, only the first part of the solution is singular, and all the remainder is done in a deterministic way; that's what we would call the Da Prato–Debussche trick, or the Bourgain argument for invariant measures. In the works by Hairer and others on stochastic PDE, there are only finitely many Picard iterations which give the singular part of the solution, and all the remainder part is deterministic analysis, even if it can become very complicated. In our case, because of a kind of integrability, we are lucky enough to have a contribution of every Picard iteration to the singular part of the solution, and of course the remainder part is again done in a deterministic way, so the philosophy remains the same. I would say that the novelty in this result is that each Picard iteration, or, if you prefer, each term of this expansion, contributes to the singular part of my solution. Of course, as we will see later, this works because there is some integrability: the resonant part of the equation is integrable, and that's why this integrable equation contributes at every order. If you take the methods for, say, KPZ or the Φ⁴₃ model, they don't have such integrability, but they take several terms and then all the remainder is deterministic. So this is the main message of my talk, what I just said. OK, so this was the comparison with stochastic PDE, and this is somehow the novelty of the result, but of course I should admit that we didn't solve the true problem, which would be NLS itself. So let us now compare with the work of Bourgain. Actually, it's more or less the same thing: Bourgain solved this problem for the Gibbs measure. The Gibbs measure is something which is easier in a sense, but he did it in 2D. For the Gibbs measure you should divide the Fourier coefficients by something decaying like this; that's what Bourgain did, and in fact in his paper he proved that the solution is the linear
evolution plus a smoother term; essentially, he only keeps Q_1(u_0), and all the other terms are handled deterministically by his complicated Fourier restriction analysis. Our structure is quite different: we take a nonlinear function of the initial data plus a smoother term. And one realizes that actually Bourgain could also do as we do, but he doesn't, because, you know, I had this correction term; in the case of the Gibbs measure it should be divided by n, sorry, by (1+n²)^{1/2}, and you see that for very large n this factor is like 1, just because of this decay. So our result is, I think, also true in Bourgain's case, but of course he doesn't do it, because he realizes that the singular part is only Q_1. And of course there is a price to pay, which I didn't mention: the philosophy in this work is to put as many terms as you need in the singular part of the solution, but if you put more terms in the singular part, the probabilistic analysis becomes harder, because you take more information that you should be able to analyse. That's why it's not a very good idea to put too many terms on the probabilistic side; somebody might ask why you don't treat all the terms probabilistically, but then it becomes an untreatable problem from the probabilistic viewpoint. So somehow there is a delicate balance: you should put something in the probabilistic part, but not too much, because if you put too much, you need to do more probability. What we do in our result is to put more things in the probabilistic part; in that sense, I think our result is harder than Bourgain's from this viewpoint, because we have more probabilistic input. But the deterministic input in Bourgain's result for the 2D NLS is harder, because he deals with a 2D problem, and then the deterministic part is harder. So the results can be compared like this: in what I explained, the probabilistic part is more refined, but since we are in 1D, in the deterministic part we should suffer less than
Bourgain, while in his case he is lucky enough to have this oscillation, so he doesn't need to keep the whole expansion, only one term at that place: less probabilistic information, but much more deterministic analysis. So that is, somehow, my viewpoint on these problems. OK, and in this analysis we have this new feature of the nonlinear oscillation, so let me just make a digression. When discussing his result on the 2D NLS, Bourgain is of course familiar with what he is doing, and he says the solution is the linear evolution plus a smoother term, and he writes: this property reminds one of the scattering occurring in some dispersive models. Scattering means that at infinity your solution is like a free evolution, and you just need to make a vocabulary change: you replace decay by regularity and you have this property. Amazingly enough, this phrase was very motivating for me when I worked on this project, because our result, in this vocabulary of Bourgain, is a modified scattering result. I don't have time to enter into the details, but our proof is very close, philosophically, to the proof we did with Hani, Pausader and Visciglia on the growth of Sobolev norms; of course, in my mind we replace decay with regularity, but the structure, the cancellations, are very close to each other. It's the same philosophy: if you have listened to a talk of mine on the growth of Sobolev norms, here we follow the same philosophy; of course the technicalities are different, but the main viewpoint is the same. OK, so let me now go quickly, I have 5 minutes. This is the splitting: our nonlinearity can be split into a resonant part and a non-resonant part. It becomes technical, but this part carries the important restriction that n₂ is different from n₁ and from n₃; if I didn't have this restriction it would simply be the nonlinearity |u|²u. This is the non-resonant part, and this is the bad part: for us this is the resonant part. As you see, there is one resonance remaining, this one, and so this one I should keep it
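The splitting just described can be written explicitly; the following display is my reconstruction in standard notation for the cubic nonlinearity on the circle (it is not copied from the slides, and sign conventions may differ):

```latex
% Fourier-side splitting of the cubic nonlinearity |u|^2 u on the circle,
% for u(x) = \sum_{n \in \mathbb{Z}} \widehat{u}(n) e^{inx}.
\begin{align*}
\widehat{|u|^2 u}(n)
  &= \sum_{n = n_1 - n_2 + n_3}
     \widehat{u}(n_1)\,\overline{\widehat{u}(n_2)}\,\widehat{u}(n_3) \\
  &= \underbrace{\sum_{\substack{n = n_1 - n_2 + n_3 \\ n_2 \neq n_1,\; n_2 \neq n_3}}
     \widehat{u}(n_1)\,\overline{\widehat{u}(n_2)}\,\widehat{u}(n_3)}_{\text{non-resonant part}}
   \;+\; \underbrace{\Bigl(2\sum_{m \in \mathbb{Z}} |\widehat{u}(m)|^2
       \;-\; |\widehat{u}(n)|^2\Bigr)\widehat{u}(n)}_{\text{resonant part}}.
\end{align*}
```

The renormalisation removes the mass term 2∑|û(m)|², which is divergent for white-noise data, so the single remaining resonance is, up to sign, |û(n)|²û(n): the one term "to keep".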
and this one is non-resonant because of this arithmetical property. If I put a 4 here I gain 2, but I need to gain an epsilon here, which would be the case if I had 2 plus something; for the true Schrödinger equation there is no such factor, and I don't know what to do there, even though an epsilon gain in the resonance relation would be sufficient for our method. So I have my nonlinearity split into these two terms, and this is the bad term. The natural answer is to say: OK, I will just take the solution of the equation where I only keep the resonant part. This is something Kuksin and Pöschel made a big business of many years ago, because that's the way you construct quasi-periodic solutions in the KAM theory of Kuksin and Pöschel; that's exactly what they do. If you take the resonant part of the equation — this is why I go to infinitely many Picard iterations — it is given by this representation: this is the explicit solution of the resonant equation. And, you see, if I don't have this oscillation, I am periodic in time, because n² is an integer, but if I put here some coefficients which are all different, it becomes quasi-periodic in time. That's precisely, from my viewpoint, the basic thing you do, as in Kuksin–Pöschel, when you construct quasi-periodic solutions: you take the integrable part, it introduces this quasi-periodic oscillation, and you can start the KAM theory; the remainder part is a perturbation. In a sense we do the same thing here, but for low-regularity solutions, and you can see that modified scattering is the same type of mathematics. This is the place where we really do something non-perturbative, by solving this equation explicitly. Now, if I put the white noise here, all these û₀(n) are the g_n, and so the main idea is to say: my solution is this term plus something more regular. And then, during more than one year, we were stuck, because even if you believe in this decomposition, it's not clear what to do. The main point is that one should make a gauge transform: solving the equation directly, we didn't
succeed in justifying this decomposition, but we succeeded in justifying it after performing a gauge transform. This gauge transform is very nice; I will just click and say what it does, and then I stop. The interest of this transform is that the equation becomes different: now the randomness goes into the nonlinearity. It is, of course, not a white noise or any random forcing; what I have is that the new equation has a better resonant term and a worse non-resonant term. Why, I don't have time to explain, so maybe I stop here. The point is that I make this gauge transformation, which makes the non-resonant part of the equation worse, because of this random oscillation; this is what makes life difficult. But I am lucky that in the resonant part I erase the initial data, which makes it suitable for low-regularity analysis. So this is the point, and once we had this gauge transform, I started to believe in the approach. If you make the computation here, there is a key cancellation, exactly of the same type as for the modified scattering; here is the place where the experience with modified scattering was crucial, because the same type of cancellation occurs, and then we realized that this point has no analogue in the non-resonant directions. Once you write this, you start to believe that this approach works, and indeed, after suffering for some time, we wrote a paper, which is available. I think I stop here, because I am over the time, and I think I have explained the main message, which was over there. I am over the time, sorry.
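For reference, the explicit solvability of the resonant equation invoked in the talk can be sketched as follows; the notation is mine, written mode by mode for the renormalised resonant system, and the sign conventions are only one possible choice:

```latex
% Renormalised resonant system: each Fourier mode decouples into an ODE
%   i\,\partial_t a_n = |a_n|^2\, a_n   (up to sign conventions).
% Since \partial_t |a_n|^2 = 0, the modulus of each mode is conserved, so
a_n(t) \;=\; a_n(0)\, e^{-\,i t\, |a_n(0)|^2}.
% With white-noise data a_n(0) = g_n(\omega), the phases e^{-i t |g_n(\omega)|^2}
% are the strongly oscillating factors of the decomposition; since the
% frequencies |g_n|^2 are almost surely rationally independent, the resonant
% flow is quasi-periodic rather than periodic in time.
```

This is the non-perturbative step of the argument: the resonant flow is solved exactly, and the full solution is then sought as this explicit profile plus a smoother remainder.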