Well, thank you. I'm grateful to be here, and I want to thank the organizers for this nice and interesting conference. I'm going to talk about conserved energies for NLS, mKdV and KdV; this is joint work with Daniel Tataru. Before I state exactly what we do, I want to give the equations. There is the NLS equation, in one dimension,

i u_t + u_xx = ± 2 |u|^2 u;

in the minus case it is focusing and there are solitons, in the plus case it is defocusing and there are no solitons. Then there is the modified KdV equation,

u_t + u_xxx ± 2 |u|^2 u_x = 0;

the constant in front of the nonlinearity is a normalization (some people write 6 u^2 u_x; I want to write it like this), and the plus or minus again distinguishes the focusing and the defocusing case. I write it with the absolute value in order to allow complex solutions; then it is the complex mKdV, otherwise it is the real one. And then there is the KdV equation,

u_t + u_xxx - 6 u u_x = 0.

All these equations are closely connected: they are integrable, and the integrability shows in the existence of a Lax pair, which I want to write in the following version, as a first-order system for the x-derivative of a map psi into C^2. In the defocusing case, for the plus sign,

psi_x = [ -i zeta, u ; u-bar, i zeta ] psi,

so this is the Lax equation, and zeta is a complex spectral parameter. The amazing fact is that one can complement it with an equation for the time derivative (I have to look it up because I can't remember these things; signs and factors may differ between conventions),

psi_t = [ -i (2 zeta^2 + |u|^2), 2 zeta u + i u_x ; 2 zeta u-bar - i u-bar_x, i (2 zeta^2 + |u|^2) ] psi.
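Written out compactly, the pair just described is, in one common normalization (this is my reconstruction of the board; signs and factors of 2 differ between references):

```latex
\psi_x = M\psi, \qquad \psi_t = N\psi, \qquad
M = \begin{pmatrix} -i\zeta & u \\ \bar u & i\zeta \end{pmatrix}, \qquad
N = \begin{pmatrix} -i\,(2\zeta^2+|u|^2) & 2\zeta u + i u_x \\[2pt]
                     2\zeta \bar u - i \bar u_x & i\,(2\zeta^2+|u|^2) \end{pmatrix}.
```

Equality of the mixed derivatives, psi_xt = psi_tx for every zeta, forces the zero-curvature condition M_t - N_x + [M, N] = 0; with these matrices the (1,2) entry of that condition reads u_t = i u_xx - 2i |u|^2 u, which is exactly the defocusing NLS, i u_t + u_xx = 2 |u|^2 u.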
So these are two first-order equations, and the very interesting structure is the following: if we want to solve them simultaneously, for given data at one point and for general data, then this is only possible if a compatibility condition is satisfied, and the compatibility condition is exactly the defocusing NLS. The connection to mKdV is similar: for mKdV there is the same structure, with the same spatial operator and the same first equation, but a different equation for the time derivative; again the system is solvable for given data at a point, say x = 0, t = 0, for general data, if and only if u satisfies the mKdV equation. For the focusing case there are some sign changes which I don't want to go into: the only change in the spatial equation is a minus sign, and in the time equation it is more complicated, but that equation does not play a big role in what I am going to tell you. Now, what is the effect of that? Suppose we take zeta with imaginary part larger or equal to zero; then we may look for solutions. It is a 2-by-2 system, so there is a two-dimensional space of solutions, and I want to normalize them on the left: I ask that psi behaves like (e^{-i zeta x}, 0) on the left, as x tends to minus infinity. Then on the right (let us suppose for the moment that u is compactly supported) psi decomposes as a linear combination of a similar fundamental system on the right, and in that decomposition the first component behaves like T^{-1}(zeta) e^{-i zeta x}; here one has to be a bit more careful about the limit as x tends to plus infinity if zeta is a real number.
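As a sanity check of this normalization, here is a small numerical sketch (my own, not from the talk; the function names and grid parameters are arbitrary choices). It computes the left-normalized Jost solution of the defocusing system by RK4 integration, reads off T^{-1}(zeta) on the right, and compares T^{-1} - 1 with its quadratic (Born-type) approximation for a small potential.

```python
import numpy as np

def transmission_inv(u, z, L=12.0, n=4000):
    """T(z)^{-1} for real z: integrate psi1' = -i z psi1 + u psi2,
    psi2' = i z psi2 + conj(u) psi1 over [-L, L] by RK4, starting from
    the left-normalized data psi ~ (e^{-i z x}, 0); on the right,
    psi1 ~ T^{-1} e^{-i z x}."""
    h = 2.0 * L / n

    def f(x, p):
        ux = u(x)
        return np.array([-1j * z * p[0] + ux * p[1],
                          1j * z * p[1] + np.conj(ux) * p[0]])

    x = -L
    psi = np.array([np.exp(1j * z * L), 0.0], dtype=complex)  # e^{-i z (-L)}
    for _ in range(n):
        k1 = f(x, psi)
        k2 = f(x + h / 2, psi + h / 2 * k1)
        k3 = f(x + h / 2, psi + h / 2 * k2)
        k4 = f(x + h, psi + h * k3)
        psi = psi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return psi[0] * np.exp(1j * z * L)  # strip the free oscillation

def born_term(u, z, L=12.0, n=1500):
    """Quadratic term of T^{-1}: int_{x<y} e^{2iz(y-x)} u(y) conj(u(x))."""
    h = 2.0 * L / n
    xs = (np.arange(n) + 0.5) * h - L               # midpoints
    uv = np.array([u(x) for x in xs])
    ph = np.exp(2j * z * xs)
    A = np.outer(uv * ph, np.conj(uv) / ph)          # A[j,i] = e^{2iz(y_j-x_i)} u(y_j) conj(u(x_i))
    return h * h * (np.sum(np.tril(A, k=-1)) + 0.5 * np.sum(np.diag(A)))

u = lambda x: 0.05 * np.exp(-x**2)                   # small bump potential
z = 1.0
lhs = transmission_inv(u, z) - 1.0                   # T^{-1} - 1
rhs = born_term(u, z)                                # its quadratic approximation
```

For the defocusing system one expects |T(zeta)| <= 1 on the real axis, i.e. |T^{-1}| >= 1, and for a small potential the difference between T^{-1} - 1 and the quadratic term is of higher order in the amplitude.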
So on the right we get a linear combination of the two solutions of the fundamental system which you would have if u were equal to zero, and I use this decomposition to define T^{-1}, which works well even if the imaginary part of zeta is positive; T is called the transmission coefficient, and the other coefficient is the reflection coefficient. Now, the fact that the NLS equation is the compatibility condition for solving these two equations allows us to solve the time equation at plus and minus infinity and check what happens with the function psi. At plus or minus infinity we do not see the effect of u, so there we have a constant-coefficient differential operator which gives the time derivative of psi in terms of psi. If you check this on the left there is some explicit time evolution, if you do it on the right there is some time evolution, and what comes out at the end is that T(zeta) is independent of time whenever u is a solution of the NLS equation. So if we look for quantities which are conserved for NLS or mKdV or KdV, we simply have to look at the transmission coefficient; we have to see whether we can define it, and then it is conserved at each point zeta: we may integrate it, we may do whatever we want with it, and this gives a conserved quantity.
Now, this is fairly old; in this context it goes back to Ablowitz, Kaup, Newell and Segur, AKNS, who did it for the nonlinear Schrödinger equation. There is an expansion of the logarithm of T(zeta), formal, the Laurent series at infinity as zeta tends to i times infinity, or an asymptotic series depending on what you assume on the initial data,

ln T(zeta) = -1/(2i) sum_j E_j zeta^{-j-1},

and the coefficients are the energies: E_0 = integral of |u|^2 dx, the mass; E_1 = i times the integral of u u-bar_x dx, the momentum; and E_2 = integral of |u_x|^2 ± |u|^4 dx, the energy (I never remember which constant it is). And then you can continue; it is classical to expand these things and get recursive formulas. What we did in the joint work with Daniel Tataru is, we sort of interpolated between these quantities in order to get a continuous family of conserved quantities. Well, that is not exactly what we did; more precisely, we interpolated between the even ones, and not even that: we interpolated between linear combinations of the even ones. So here is what we get. Let us first look at the defocusing case, for NLS and mKdV. Then E_s can be defined, at least for Schwartz potentials, as an integral over the real line of the logarithm (it is always the logarithm of the transmission coefficient which occurs) evaluated on the real axis, against a weight which is something like the Fourier weight for Sobolev spaces, <zeta>^{2s}; this is for s larger than -1/2. For that we need some decay of ln T, which one has to study and which I will not do here; let us suppose that ln T is a Schwartz function on the real axis. I will say a bit more later about when these things are defined. Another way of expressing this is to move the contour of integration to the line from i to i times infinity, which gives a second formula, now written in a real fashion:

E_s = 4 sin(pi s) integral from 1 to infinity of (tau^2 - 1)^s [ -ln |T(i tau / 2)| - corrections ] d tau

(I see I forgot a factor 1/2: the weight on the left-hand side should also be evaluated at zeta/2). The corrections are needed because the weight <zeta>^{2s} grows polynomially at infinity, so before moving the contour we have to subtract finitely many terms of the asymptotic expansion; I wanted to call these correction terms H_1, H_2, and so on. If we do it in this fashion, the result does not depend on how many terms we correct, provided we have the regularity, and we get this formula. What you see here is that if s is an integer, then sin(pi s) vanishes and only the correction terms survive, which are linear combinations of the classical conserved quantities; so in that sense we interpolate between the classical conserved quantities. Then we can do the same thing for the focusing NLS. There we have to adjust for the eigenvalues of the AKNS operator, which lie in the upper half plane, and there is a way of doing that: we add a sum over the eigenvalues, with multiplicities m_j, of some function xi, and
nothing changes on the right-hand side. Now for KdV there is the same structure. First, if s is larger than -1, then there is a generalization of the classical trace formula: there is a different function xi, and we have to adjust for the eigenvalues, which in this case are of the form -kappa_j^2; and on the line from i to i times infinity there is a similar expression as for the NLS equation, where again we have to adjust by the classical energies. Something special happens at s equal to -1: the left-hand side still makes sense, and on the right-hand side there is a part of the integral which, combined with the sin(pi s), has the effect of converging to a delta function; the outcome is then eight times the logarithm of the transmission coefficient evaluated at i/2, minus the integral of u, and here one has to do some normalization which I do not want to talk much about. So these are conserved quantities for NLS and mKdV; they go back to AKNS, and to Faddeev for this relation between the different contour integrals, and these are the objects we want to work with. Now, defining these objects for Schwartz functions is trivial, because then the scattering transform becomes something nice, the logarithm of T is a Schwartz function on the real axis, and it is an exercise in complex analysis, maybe not an entirely easy one, to get these formulas. As I said above, since T is independent of time whenever you have a solution, whatever you do with T is going to give you a conserved quantity. So the question is how to relate these quantities to the things we work with in analysis, to Sobolev spaces, and I want to give the results first and then explain a bit about the proof. Thank you. You see how it works; yeah, I tried it before, but it is always a bit different when I do it on the blackboard. Is it okay? I guess you can still get an impression of the formulas even if you don't see all of them.
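As a quick numerical illustration of the conservation statement, here is a standard split-step Fourier scheme (a toy setup of mine, not from the talk) for the defocusing NLS in the normalization i u_t + u_xx = 2 |u|^2 u. Both substeps are exactly unitary in L^2, so the simplest conserved quantity, the mass, integral of |u|^2 dx, is preserved to roundoff:

```python
import numpy as np

# Split-step Fourier for i u_t + u_xx = 2|u|^2 u (defocusing).
# Linear step: u_t = i u_xx, solved exactly on the Fourier side.
# Nonlinear step: i u_t = 2|u|^2 u, solved exactly pointwise,
# since |u| is constant along that flow.
N, L, dt, steps = 512, 40.0, 5e-4, 2000           # arbitrary toy parameters
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(-x**2).astype(complex)                  # Schwartz-type datum

mass = lambda v: np.sum(np.abs(v)**2) * (L / N)    # int |u|^2 dx
m0 = mass(u)
for _ in range(steps):
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))  # linear step
    u = u * np.exp(-2j * np.abs(u)**2 * dt)                    # nonlinear step
m1 = mass(u)  # agrees with m0 up to roundoff
```

This only checks the lowest conserved quantity, of course; the higher ones are conserved exactly by the flow but only approximately by the splitting.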
So, the theorem; this is with Daniel Tataru. What we do is, we look at the right-hand side: we define the quantities by the right-hand side, and then it turns out that the left-hand side also makes sense. The first statement is: if u is in H^s, then both sides are well defined, and the map u to E_s(u) is continuous, in u and also in s whenever u is sufficiently regular; and it is analytic in u provided i/2 is not an eigenvalue. This exclusion only matters in the focusing case, because there the function xi is not analytic at 1, so this point has to be excluded. Otherwise it is analytic, and in fact jointly analytic in s and u in the appropriate sense: whenever u is in H^sigma, then E_s is analytic in u and s for s less than sigma. I don't want to write that out, but in the appropriate sense it has all the properties one would want, and the nice thing is that we can play with both sides and use whichever is more convenient. Second: how does E_s compare with the norm? The first statement says that whenever u is in H^s then E_s is defined, but does it control the H^s norm? We can look at E_s(u) minus the quadratic expression which, on the Fourier side, is essentially the square of the H^s norm written with the same weight evaluated at xi/2 (I did not write all the factors of 2 consistently). This difference is less than or equal to a constant times the square of the H^s norm of u, provided that the norm of u in l^2 DU^2 is less than delta, for some delta depending on s; this is a strange norm which I want to explain later. So this tells us: E_s is analytic, we can expand it analytically in u at zero, and the quadratic part of that expansion is exactly the H^s norm.
So if this quantity is small (I am going to say more about that in a minute), then E_s(u) and the square of the H^s norm are pretty close together. Also, in that case the function xi at an eigenvalue kappa_j satisfies

xi(kappa_j) >= c <Re kappa_j>^{2s} Im kappa_j,

with angle brackets; so if this quantity is small, then all terms on the left-hand side of the formula are non-negative. This is for NLS, with s bigger than -1/2, and there is a similar theorem for KdV: the same sort of statement for s larger than -1, and for s equal to -1 it is the same statement, but in H^{-1} we have to take out the potentials for which -1/4 is an eigenvalue of the Schrödinger operator. This makes sense, because then we would have to evaluate the singular function, this logarithm, at a point where it degenerates, and that we cannot do. Okay, now some corollaries. The first corollary, using this together with scaling: if u_0 is in H^s (I do not specify which problem, but s in this range), then the supremum in time of the H^s norm of u(t) is finite, and this can be made more precise for the solutions of NLS, mKdV, whatever. The second corollary: if you look at solitons (solitons correspond to eigenvalues), and at breathers, and at anything else you want to define in terms of the spectrum, then they are all stable in H^s, for all s in that range.
This is again a question of scaling. If you look at the formulas, at the left-hand side: if the smallness condition is satisfied, then every term is positive, and the soliton is exactly the case where the logarithm of T vanishes on the real axis and there is one contribution corresponding to the eigenvalue. So this is the lowest value E_s can take for given spectral data: E_s is a Lyapunov functional, it is minimized at the soliton, and similarly at the breather, which is also characterized in terms of the spectrum. Okay, there is one thing on the blackboard which I did not explain, and I do not want to go into it too much, but I want to give some information: this DU^2. What is U^2? It is something like H^{1/2}, the space with half a derivative; but H^{1/2} does not have good structure for our purposes: there is no embedding into continuous functions, and L^1 does not embed into the dual scale, so this space is not good for many of the things we do in analysis. U^2 is a replacement for it, and it has all, or at least many, of the good properties one would wish from a space with half a derivative. It is still on the same scale, also in the sense that on the Besov scale B^{1/2}_{2,1} embeds into U^2, which embeds into B^{1/2}_{2,infinity}; so it is a space very close to H^{1/2}. And DU^2 is the space of derivatives of U^2 functions; that is like H^{-1/2}, with the same sort of embeddings. Then there are similar spaces V^2 and DV^2 with similar properties, and there is a good duality between them. What do you have? You have embeddings into bounded functions, you have limits at infinity for these functions, you have embeddings of L^1 into DU^2 into DV^2, and so on: all the things which do not work for H^{1/2} or H^{-1/2}, many of those things work for U^2 and V^2. In particular, what is relevant here is the l^2 DU^2 norm: we take the function u, multiply it by a smooth cutoff chi(x - j), something which lives on a bump of size one, measure each piece in DU^2, and then take an l^2 summation over j; similarly with l^p. So we take pieces of unit size, measure them in DU^2, and sum in l^2. This l^2 DU^2 norm is less than or equal to a constant times the H^s norm of u, for all s larger than -1/2; so whenever you could possibly hope for a good bound, this quantity is bounded, and it is controlled in terms of all the relevant Sobolev spaces: I could replace H^s here by H^{-1/2 + epsilon}, and the same statement is true. Question: Can I ask a question? If you take u in H^s with s between -1 and -1/2, this quantity could be infinite; so you have the first part, but you do not have the l^2 bound, is that correct? Answer: You mean u purely in H^s with s less than -1/2? Then no: the assumption s larger than -1/2 is for both parts of the theorem. Question: Oh, I see, so nothing of that makes sense. Answer: I am not quite sure about some pieces of the statement, but essentially nothing makes sense then. Question: And what is the analogue of the U^2 space for the KdV part? Answer: Ah, I should have given that: for the KdV part you simply replace l^2 DU^2 by the analogous space at the level of H^{-1}. Okay. Well, what I present here is connected to a lot of literature, and I cannot go into all of that. The inverse scattering part for this AKNS system is due to Ablowitz, Kaup, Newell and Segur, to Faddeev and many others, Lax; I mean, I guess there is
no point in my repeating the history of inverse scattering, and I am sure there are people in the audience who could do that much better than I could. For the a priori estimates there is a different sort of history. There are a priori estimates by Christ, Colliander and Tao for the nonlinear Schrödinger equation; I proved some a priori estimates with Daniel Tataru, not uniform in time but with control for all times, down to H^{-1/4} for the nonlinear Schrödinger equation. There is a lot of work on the KdV equation; I did some work on KdV in H^{-1} with Tristan Buckmaster, a uniform-in-time a priori estimate in H^{-1}, which used a bit of inverse scattering but not much of the structure. And there is the work done simultaneously by Killip, Vișan and Zhang on the KdV equation; Rowan gave a talk on it last week, for s, I think, between -1 and 0, but I guess there was also some extension beyond zero, probably up to 1. Yes, I think what he said was 1. They worked with this term here, and that is important: it allows one to show an equivalence of this single term with the H^s norm, and the integrated versions of that probably lead to something similar to the formula on the right-hand side, with a single correction at j equal to 0, which is the L^2 norm; so it would probably give the same formula as this one, in that range. Okay, so back to us: how do we prove it? The strategy is to control the integral part on the right-hand side; this is the most essential part. Once we have control of the integral part on the right-hand side, it also gives control, by certain limits, of the whole right-hand side, and then if you
look at the defocusing case, this allows the following. First we look at the transmission coefficient not in the whole half plane; we start with the smallness condition. We try to control the transmission coefficient T(zeta) away from the real axis, and the smallness condition ensures that we can rescale things, so that whenever we get control somewhere, we also get it, let us say, below one half. Then, at least morally, the logarithm of T is a harmonic function which, in the defocusing case, has real part less than or equal to zero; so we can interpret it as such a harmonic function on the half space, the real part of its boundary value is a measure, and that allows us to define the left-hand side. So if we use, in the defocusing case, that |T(zeta)| is less than or equal to one, then the real part of the logarithm has a sign, we can take the trace on the real axis, and that gives an interpretation of the formulas on the left-hand side. In the focusing case one also has to use the Bäcklund transform, but I do not want to go into that. What I want to do is explain how we can control the integrals. So we go back to the AKNS system, which was

psi_1' = -i zeta psi_1 + u psi_2,
psi_2' = i zeta psi_2 + u-bar psi_1,

and we want a solution such that psi behaves like (e^{-i zeta x}, 0) as x tends to minus infinity. Then we can try to solve this equation recursively, to get a power series expansion in terms of u. In the first step we integrate the second equation and get

psi_2^{(1)}(x) = integral from minus infinity to x of e^{i zeta (x - y)} u-bar(y) e^{-i zeta y} dy,

and then we plug this into the first equation and get something similar for psi_1^{(2)}, where the superscript stands for the step of the iteration. Then the transmission coefficient comes out as T^{-1}(zeta) = 1 plus the limit, as x tends to infinity, of the first component stripped of the free oscillation e^{-i zeta x}; iterating, we expand T^{-1} = 1 + T_2(zeta) + T_4(zeta) + ..., and these terms have a pretty nice structure. If you do the math, then

T_{2j}(zeta) = integral over { x_1 < y_1 < x_2 < y_2 < ... < x_j < y_j } of e^{2 i zeta (sum of the y_k minus sum of the x_k)} u(y_1) ... u(y_j) u-bar(x_1) ... u-bar(x_j) dy dx.

So we do the iteration and get a pretty nice representation: on this domain each y_k dominates the corresponding x_k, so if the imaginary part of zeta is positive, the exponential decays, and this gives exponential decay whenever we are in the upper half plane. Okay. So one does that and then tries to control these terms, and I guess I am running a bit out of time, so I have to decide what to focus on. The first type of estimate, which I do not want to go into in detail, is that |T_{2j}(zeta)| is bounded by the 2j-th power of a constant times the l^2 DU^2 norm, localized at the scale (Im zeta)^{-1}, of the potential modulated by the oscillation e^{2 i Re(zeta) x} (I am not careful with the factors of 2 here). This is where the properties of the spaces DU^2 come in nicely: one has to put part of the exponential into u as a modulation, and then these spaces give this estimate very naturally; the imaginary part of zeta corresponds to a localization in space to intervals of size (Im zeta)^{-1}, which is exactly the scale on which the exponential decays. There is a concentration phenomenon here that I wanted to say a bit about, but I am going to skip that. So this is the first estimate, and it gives convergence of the series provided this quantity is small, and this is basically where the condition in the theorem
comes in. Okay, so far so good. But we do not want estimates in terms of this quantity, we want estimates in terms of H^s norms, so how do we get from here to H^s norms? Well, we can use embeddings, look at the scaling parameter and get powers of the imaginary part, and this handles some of the integrals, but it only goes up to a certain point; beyond that we have to look at the logarithm of T. So now I want to go to the logarithm (and, well, the way we did it, I think I made another sign error somewhere, anyhow). We want to take the logarithm of T^{-1}. Now T^{-1} is the sum of integrals, which I write in a diagrammatic fashion: there is the trivial term 1, there is the first integral T_2, there is the second integral T_4, and so on; I wrote it here in red and I hope it is visible. The diagram codifies the information that x_1 < y_1 < x_2 < y_2, and then the same with three pairs, with four pairs, and so on. If you take the logarithm, you get a lot of combinatorics: we can split the integration domains into the sets where the variables are ordered, and this is messy, really messy. I did it for the first part, I got the second part; I computed these things by hand up to a certain order, and then I was convinced that the structure somehow persists, but at MSRI we were stuck with that and did not know what to do, and it was important for us to get this structure. Let me explain what the structure is. For every arc which is going up I put an x, for every arc which is going down I put a y, and then I get an ordering, say x_1 < x_2 < x_3 < y_1 < x_4 < y_2 < y_3 < y_4; the indices do not matter, but this codifies an integral of the same type. The nice thing is that in the integrals which occur in the logarithm, the last y is always to the right of every x, so the integrand has much better decay on the domain of integration than in the original integrals, because it decays exponentially whenever two of the variables are far apart. Whereas the ordering information in T_{2j} leads to decay of order basically at most (Im zeta)^{-j}, because that is the size of the corresponding simplex, here we get decay like (Im zeta)^{-2j+1}, because the last y is on the right. Well, so we cannot split these things the way I first said, that was not good; but no matter what we do, the integrand decays exponentially whenever we pull the variables apart, while in the original integrals we could have the situation that y_1 is close to x_1 and y_2 is close to x_2 but x_2 is far away from y_1, and then we would get no decay. We pay for that with the combinatorics. So at MSRI there was the fortunate situation that in the parallel program there were people from probability, and I showed this question to Martin Hairer, and Martin Hairer told me that I should look at Hopf algebras, and that it would be related to a famous formula for Hopf algebras. My colleagues in Bonn were excited when I told them about Hopf algebras. Well, but then I tried to put it into the context of the Milnor-Moore theorem, and that did not work. The basic thing is, it is not a big effort to show that this is a Hopf algebra, because it is very close to a standard one, the shuffle algebra, which I did not know before; that part is not so difficult. The Milnor-Moore theorem would give the conclusion if this were a cocommutative Hopf algebra, but it is a commutative one; it sounds almost the same, but it is not. But then, still, if one tries to do things by hand, basically,
then all the integrals which occur are connected, and one gets the structure. So where are we? We have this estimate on the T_{2j}, which transfers to the expansion of the logarithm of T, so we get the estimates we need whenever j is large compared to s: the high-order terms of the expansion are easy, but the lower-order terms have to be handled case by case. And if you look at the conserved quantities, it is clear that something has to happen: if s is less than zero, there is no classical conserved quantity which helps, and at s equal to zero you get the L^2 norm, so the L^2 norm has to be handled. This is easy: if you compute T_2, its contribution is exactly the square of the H^s norm; that is a simple calculation, so the quadratic term is fine. At the next term, T_4, you expect the L^4 norm to the fourth power to show up, at the level s equal to one, so this has to be taken care of. With the connected structure you get up to this point, and the idea to go beyond it is pretty simple: do an integration by parts in the y_1 variable. If you integrate by parts against the exponential, you gain a factor 1/zeta, and the derivative falls either on the potential, in which case you get the same formula with a derivative, or you pick up boundary terms, which are integrals of one dimension less; and then you continue. If you end up with a one-dimensional integral, you get the conserved quantities, and otherwise you do estimates similar to the ones I explained before. So I guess I went a bit over time, I am sorry for that; thank you for your attention.

Question: Is this the structure of a Fredholm determinant?

Answer: I have no idea. Probably it has something to do with a Fredholm determinant, but I guess you know that better than I do.

Question: For KdV I knew it in the scalar case; passing from the series for T to its logarithm, restricting to the connected diagrams, is exactly taking the logarithm of the determinant. I have not gone through the details in the matrix setting, but in the scalar setting it is actually a general phenomenon: the transmission coefficient is a Fredholm determinant, because the integral kernel is semi-separable, I mean, a function of the lesser variable times a function of the greater variable. And if you ask why one might recognize that: it is exactly what makes processes on the line Markovian. Why is the Ornstein-Uhlenbeck process Markovian? Because the kernel is semi-separable.

Answer: I am not familiar with these things, but what I think is true is that there is a lot of combinatorics behind this which can be expressed in various ways, and it should all be connected. The reason why Martin Hairer is interested in these Hopf algebras is, I think, different; it is closer to what I presented here and not to the Fredholm determinant. But the striking thing is: there is this theorem, it is not difficult to prove, and it is not difficult to use it to compute these things to any order, but the people I asked, and we ourselves, are sort of stuck on what the coefficients of this expansion should look like. So this is the big miracle: it gives connected integrals, which is what we need, but we do not have any access to the size of the coefficients, and maybe the Fredholm determinant would allow one to get that more explicitly. There is a lot of combinatorics behind it which I do not understand.

Question: Stanley's Enumerative Combinatorics has a theorem which says that if you have a generating function whose coefficients count the objects of some type, then the logarithm of your generating function is exactly the generating function counting the connected objects of that type.

Answer: That sounds right, yes; I would be interested in the reference, I would be happy to have that, but it is clearly related to many things.

Question: Just a comment on this estimate: I think the constant C is independent of j, and if so, it is articulating that the transmission coefficient is really a holomorphic function of u and u-bar, and these are the Taylor coefficients.

Answer: Essentially yes, exactly; it is sort of the McKean-Trubowitz picture of the world, in which these objects are holomorphic. Other comments or questions? If not, thank you again.
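The exponential-formula fact quoted from Stanley's Enumerative Combinatorics is easy to test in a toy case. Here is a small sketch (my own; the variable names and the truncation order are arbitrary) with labeled graphs: taking the formal logarithm of the exponential generating function of all labeled graphs reproduces the counts of connected labeled graphs.

```python
from fractions import Fraction
from math import comb, factorial

# EGF of all labeled graphs on n vertices: a_n = 2^(n choose 2) / n!.
N = 8
a = [Fraction(2 ** comb(n, 2), factorial(n)) for n in range(N + 1)]

# Formal power-series logarithm b = log(a), using a_0 = 1 and the
# identity a' = a * b', i.e.  n a_n = sum_{k=1}^{n} k b_k a_{n-k}.
b = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    s = sum(k * b[k] * a[n - k] for k in range(1, n))
    b[n] = (n * a[n] - s) / n

# n! b_n should count connected labeled graphs on n vertices.
connected = [int(b[n] * factorial(n)) for n in range(1, N + 1)]
# connected[:5] -> [1, 1, 4, 38, 728]
```

The first few values, 1, 1, 4, 38, 728, match the known numbers of connected labeled graphs on 1 through 5 vertices, which is exactly the log-counts-connected-objects statement in this simplest setting.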