OK, so let me first define the words in the title, which also gives us a chance to introduce some notation. So that's the nonlinear Schrödinger equation: defocusing with a plus sign, focusing with a minus sign, and which one is which will be morally irrelevant for the entire talk. But plus is defocusing, minus is focusing. This is the evolution equation for a field which, at each time t, maps R^d into the complex numbers. One conserved quantity is the mass: the mass of u is the integral of |u|², and it's conserved by the flow. And what does mass-subcritical mean? What it really means is that conservation of mass precludes interesting nonlinear behavior at small scales. If u solves the equation above, then so does the rescaling u_λ. But if you look at how the mass of u_λ varies, what's it going to be? There's a Jacobian factor, which gives this power of λ. So if u is some solution which has interesting nonlinear behavior, and I want a solution that has the same interesting nonlinear behavior at small scales, then I'm going to need to take λ big. But if I take λ big, then the mass is also going to be big, precisely when p is smaller than 4/d. So mass-subcritical is a complicated way of saying this. That is the topic of the talk, and these definitions will last throughout. Now, maybe the most important announcement about the theory of mass-subcritical NLS is that it's not dead. You don't hear many talks about it, but there are a lot of things that we don't understand, it turns out. If you want a flavor of what's known and not known, a good place to start is not this talk but the early work of Kenji Nakanishi, who probed it quite deeply, building on a lot of earlier work whose references you can then find in the aforementioned papers. OK, so it's not difficult.
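To pin down the notation from the board (my reconstruction; I write the nonlinearity as |u|^p u, matching the exponents used throughout the talk):

```latex
% Mass-subcritical NLS and its scaling symmetry
i\,\partial_t u + \Delta u = \pm |u|^{p} u, \qquad u(t)\colon \mathbb{R}^d \to \mathbb{C},
\qquad M(u) = \int_{\mathbb{R}^d} |u(t,x)|^2 \,dx .

% If u solves the equation, so does the rescaling
u_\lambda(t,x) = \lambda^{2/p}\, u(\lambda^2 t, \lambda x),

% and the mass transforms with a Jacobian factor:
M(u_\lambda) = \lambda^{4/p - d}\, M(u).

% Mass-subcritical means p < 4/d, i.e. 4/p - d > 0: reproducing the dynamics
% at small scales (\lambda \gg 1) forces the mass to blow up.
```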
And this is perhaps one of the reasons it gets neglected: it's absolutely trivial nowadays to prove that it's globally well-posed in L², or for that matter in H¹. It's a simple application of Strichartz estimates. Before Strichartz estimates it was torture, but now it is not. On the other hand, one of the peculiarities, if I can be a little poetical for a moment, is that looking at the mass-subcritical case, you feel a little bit like Alice through the looking glass. (A looking glass, old-fashioned, would be a mirror.) All the characters are the same, but they don't behave like they used to in the mass-critical and supercritical cases. In particular, the equation is ill-posed in the scaling-critical Sobolev space H^{s_c}. Oh, by the way, what is s_c? It's the scaling-critical regularity, s_c = d/2 − 2/p, and mass subcriticality is exactly the statement that this is negative. So rather than explain what I mean, let me sketch a proof, following an unpublished paper of Christ, Colliander, and Tao, and let the meaning of ill-posedness emerge. Imagine you take initial data, and this will also introduce one of the key enemies in this regime, which has height about 1 but is very fat: it lives truly at scale L. It's not just flat in the middle; it's living truly at scale L. This is going to be my initial data u_0. If you calculate the critical norm of u_0, just by dimensional analysis you can guess that it's going to be about L^{d/2 − s_c}. Since s_c is negative, this is a huge power of L: if L is big, this is big. Now, it's fat, so it's slow moving, and the solution won't disperse for times up to about L². That means the nonlinearity can act over this time scale.
And now let's consider another initial datum, u_1, which is going to be 1 + L^{-2} (my final decision of notation here) times u_0. You can see that u_1 is relatively close to u_0: the difference is about L^{-2} times the norm of the initial data, which admittedly is a huge number. But now let's look at times near the end of this interval. The solution stays of height about 1, so one of the equations essentially has a 1^p in the phase, and the other has (1 + L^{-2})^p, which is still about 1 plus p times L^{-2}. Over a time interval of length L², that difference in rotation rates adds up: it adds up to a phase shift of size about 1. Agreed? (L is big, so L^{-2} is small. Ah, that's better. That explains at least some of the puzzled faces; probably all of them, I'm sure. And time has dimensions of length squared, so t ~ L² is the only thing that makes sense.) So over the course of this extremely long time, the two solutions phase-decohere until one is more or less not parallel with the other; maybe some parts are parallel, but some parts are anti-parallel. So at this much later time, around t ~ L², if I put a plus sign between them, at least I'm consistent: I'd probably get more cancellation putting a plus between them than a minus. OK, so what does this amount to?
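Here is the back-of-the-envelope computation behind this sketch (my reconstruction; the bump profile φ and all constants are illustrative):

```latex
% Fat bump of height 1 at spatial scale L:
u_0(x) = \phi(x/L), \qquad
\|u_0\|_{\dot H^{s_c}} \sim L^{d/2 - s_c} \gg 1 \quad (s_c < 0).

% Perturbed datum, a multiple of u_0:
u_1(x) = (1 + L^{-2})\, u_0(x), \qquad
\|u_1 - u_0\|_{\dot H^{s_c}} \sim L^{-2}\, \|u_0\|_{\dot H^{s_c}}.

% The bump is fat, hence slow: no dispersion before t \sim L^2. Until then the
% dynamics is essentially the ODE i\,\dot u = \pm |u|^p u, so the two solutions
% rotate at slightly different rates:
u_j(t) \approx e^{\mp i t h_j^{p}}\, h_j\, \phi(x/L), \qquad h_0 = 1,\quad h_1 = 1 + L^{-2}.

% The relative phase grows like t\,\bigl[(1+L^{-2})^p - 1\bigr] \approx p\, t\, L^{-2},
% which reaches size 1 once t \sim L^2: the solutions fully decohere while the
% data were at relative distance L^{-2}.
```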
Huge solutions start very close, but over an enormous amount of time they don't stay close. Of course, that doesn't smell like ill-posedness at all yet; that is the sort of thing you might take for granted. But we can rescale, using the critical rescaling written over here, and make the time at which this divergence happens as small as you want. That's still not terribly dramatic; scaling-critical equations do that. Anything that can happen can happen arbitrarily quickly. But it's still a very large solution, a very large solution which just phase-decoheres. Now, the thing which differentiates the supercritical and the subcritical settings is the action of Galilei boosts. For initial data, a Galilei boost is simply a translation in the Fourier variable. So I take my totally harmless function and I modulate it highly: I multiply by a highly oscillating character. Now, s_c is negative (fairly soon I'll start putting in absolute values and minus signs), so if I highly modulate my initial data, the H^{s_c} norm goes to zero. But of course this symmetry of the equation just corresponds to shifting to a moving reference frame: the nonlinear dynamics, the nonlinearity of the dynamics, won't change. The same dynamics is just whizzing left to right. So you Galilei boost to get small data with this same decoherence effect. So it's globally well-posed in the mass space, but it's not locally well-posed, at least not uniformly locally well-posed, even in a neighborhood of the origin in the critical space. And that's one of the enemies: in the mass-critical case, Galilei boosts are a serious enemy. In the mass-supercritical case, energy conservation basically kills them; they never show up.
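The point about modulations shrinking the critical norm, schematically:

```latex
% Galilei boost of the data: modulation by a character at frequency \xi_0
u_0(x) \;\longmapsto\; e^{i x \cdot \xi_0}\, u_0(x).

% At high frequency the critical norm is governed by |\xi|^{s_c} on the
% Fourier side, and s_c < 0, so
\bigl\| e^{i x\cdot\xi_0} u_0 \bigr\|_{\dot H^{s_c}}
\;\sim\; |\xi_0|^{s_c}\, \|u_0\|_{L^2}
\;\longrightarrow\; 0 \quad \text{as } |\xi_0| \to \infty,

% while the boosted solution is the original nonlinear dynamics viewed from a
% moving frame: arbitrarily small critical norm, unchanged nonlinear behavior.
```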
But in the mass-critical case, and this persists in the mass-subcritical case, the Galilei boost has to be taken seriously. [Question:] But this only proves the absence of uniformly continuous dependence on the data. [Answer:] That's why I put scare quotes around "ill-posed"; I knew someone was going to object that it's not a very serious notion. What it does show you is that you can't run the same arguments that you do in week one of the course; something has to change. It's the looking-glass phenomenon. In fact, let me continue my metaphor one step too far: there is an explicit looking glass here, which is the pseudoconformal transformation, which in some sense intertwines the subcritical and the supercritical by putting a coefficient in time. OK, what now? So I want to talk about two theorems. For the first one, let me state a little proposition. Take our standing equation, and suppose a sequence of functions u_n and a function u_∞ are solutions to NLS with finite mass (you can read "finite mass" into the next sentence), and the initial data of the sequence of solutions converges weakly to the initial data of this last solution. Then I claim that u_n at time t converges weakly to u_∞ at time t. This engenders a number of questions of the form "why": why is it true, why would you care, and maybe why wasn't it done before. In fact, it was done before; it's in a paper of Kenji Nakanishi. Not the statement, but the proof: when you see the proof, you'll see the proof was there before. Let me try to answer the question of why you would even ask this. We know well-posedness in the norm topology on L². This statement is exactly the statement of well-posedness, not in the norm topology, but in the weak topology.
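In symbols, my transcription of the proposition on the board:

```latex
% Weak well-posedness in L^2 for mass-subcritical NLS
\text{If } u_n,\, u_\infty \text{ solve NLS with finite mass and }
u_n(0) \rightharpoonup u_\infty(0) \text{ in } L^2(\mathbb{R}^d),

\text{then for every } t:\qquad
u_n(t) \;\rightharpoonup\; u_\infty(t) \quad \text{in } L^2(\mathbb{R}^d).
```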
This is well-posedness with both statements in the weak topology. So why? Well, the first thing is that it's a little bit of a test of our understanding. For a linear map, it's a standard exercise in functional analysis that continuity from the weak topology to the weak topology is the same as boundedness, which of course is the same as continuity from the norm topology to the norm topology. But for a nonlinear map, they aren't the same. In particular, strong (norm) continuity does not imply weak continuity. Let me let you think about a simple example while I erase. An example of a map which is norm continuous but not weakly continuous would be something like this: f ↦ |f| on, say, L² of the torus. Characters converge weakly to zero, but their images don't converge weakly to zero. Is the other direction true? Does weak-to-weak continuity guarantee strong-to-strong continuity? I don't know. For the purposes of saying this is an interesting statement, it's not relevant, but I don't know. For linear maps, again, it's trivial, but for nonlinear maps, the fat books that I own don't give me any hint. OK, so why am I telling you about this? Because it plays a role in some joint work with Monica Visan and Xiaoyi Zhang (I'm first alphabetically, I guess), which is about non-squeezing. Monica has talked in a couple of places, here at the first conference and also in Berkeley, about the fact that we proved symplectic non-squeezing for the mass-critical case, and she explained, or to my mind she explained, maybe she tried to explain, why it's not a very easy problem. And part of what I want to say is: don't apply the methodology of that paper here. Here there is a more direct argument.
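A quick numerical illustration of that example (a sketch, not part of the talk): on L² of the torus, the characters e_n(x) = e^{inx} converge weakly to 0, but their images under the norm-continuous map f ↦ |f| are identically 1, so they do not converge weakly to 0.

```python
import numpy as np

# Discretize the torus [0, 2*pi) and fix a smooth test function g.
x = np.linspace(0, 2 * np.pi, 2**12, endpoint=False)
dx = x[1] - x[0]
g = np.exp(-((x - np.pi) ** 2))  # smooth bump, our weak-topology "witness"

def inner(f, h):
    """Riemann-sum approximation of the L^2(T) pairing <f, h>."""
    return np.sum(f * np.conj(h)) * dx

ns = [1, 10, 100]

# <e_n, g> decays rapidly in n (g is smooth): weak convergence e_n -> 0.
pairings = [abs(inner(np.exp(1j * n * x), g)) for n in ns]

# But |e_n| == 1 identically, so <|e_n|, g> = integral of g for every n:
# the images converge weakly to the constant 1, not to 0.
images = [inner(np.abs(np.exp(1j * n * x)), g) for n in ns]
```

So `pairings` shrinks toward zero while `images` is the same nonzero number for every n, witnessing the failure of weak continuity.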
So I want to show this using some of the ideas in that paper, but cutting out some of the horrors that come from criticality. OK, so what is non-squeezing? Take a ball in Hilbert space, centered at z, with radius R, where z is in L²; everything in this first part of the talk is about L². So I take a ball of radius R in L² around some point, and this is where my initial data is going to belong. Then you can ask: at some later time, does the solution u(t) lie in a cylinder? What do I mean by a cylinder? A cylinder whose cross-section is two-dimensional. How do I pluck out two real coordinates from a complex-valued function? Take the inner product with a unit vector. That plucks out two real coordinates and leaves the others alone. And I'm asking: are these two real coordinates in some disc centered at a complex number α, of radius little r? So now the real question is: is it true that for every initial datum in the ball, the solution at this later time is in the cylinder? And the answer is that this cannot happen: the answer is no whenever little r is smaller than big R. In the generality of symplectomorphisms of finite-dimensional Hilbert spaces, this is a famous theorem of Gromov, the Gromov non-squeezing theorem. But there's no sufficiently general analog in the infinite-dimensional case to cover the PDEs of broad interest. Starting with Kuksin, the question arose: can we prove this assertion not for general symplectomorphisms, but for the flows of our favorite Hamiltonian PDEs? The flow map of a Hamiltonian PDE is an example of a symplectomorphism, and the only example you need to care about while we're here.
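Written out, my reconstruction of the board:

```latex
% Non-squeezing for the flow \Phi_t of NLS on L^2(\mathbb{R}^d):
B(z, R) = \bigl\{\, u_0 \in L^2 : \|u_0 - z\|_{L^2} \le R \,\bigr\},

C_\xi(\alpha, r) = \bigl\{\, u \in L^2 : |\langle \xi, u\rangle - \alpha| \le r \,\bigr\},
\qquad \|\xi\|_{L^2} = 1, \quad \alpha \in \mathbb{C}.

% Claim: if r < R, the flow cannot squeeze the ball into the cylinder:
\Phi_t\bigl(B(z,R)\bigr) \not\subseteq C_\xi(\alpha, r)
\quad \text{for every } t,\, z,\, \xi,\, \alpha.
```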
And the answer has been proven, in various cases, to be no: squeezing doesn't happen. I should say, the fact that we're working in L² is because this is the symplectic Hilbert space in which the dynamics is Hamiltonian. In finite dimensions you don't get to change this; it's a given. And for the PDE it's also a given. In fact, Kuksin has examples showing that if you fiddle with that, if you don't work in the proper symplectic space but in some other Hilbert space, you can get squeezing. So you have to work in L²; you can't work in H¹. Let me quickly give a little history without writing anything down. As I said, the idea of studying this for PDE was initiated by Kuksin, who proved a relatively general theorem under a strong compactness assumption, which covered various Hamiltonian PDEs on the circle. Then Bourgain extended the class of Hamiltonian PDEs on the circle and torus to which this applies, as well as proving it for the cubic NLS on the torus. The I-team, Colliander, Keel, Staffilani, Takaoka, and Tao, proved non-squeezing for the Korteweg–de Vries equation on the torus. In fact, they wrote a very long paper proving global well-posedness in the symplectic space, and then they had to write another long paper to prove non-squeezing. And that's one of the reasons, I mean, we list various scientific reasons in the papers, but maybe the better emotional reason is: you don't understand local well-posedness fully if you can't prove non-squeezing. It's a new level of challenge on the local well-posedness front. Hong and Kwon then gave a simpler proof of non-squeezing for KdV on the torus. Roumégoux (this is not chronological order, just the order I wrote it down) showed that the Benjamin–Bona–Mahony equation is non-squeezing on the torus. And Mendelson proved, somewhat conditionally, that the cubic Klein–Gordon equation on the three-torus is non-squeezing.
OK, now, if you listened to what just passed, the word "torus" appeared in all the statements of results. In fact, we were at MSRI in Berkeley, listening to a talk of Mendelson, and two people in the audience said: but what about R^d? And so, well, what about R^d, with criticality? That's part of the innovation. And so we wrote a paper about the cubic NLS in 2D, and as Monica hopefully explained to you, it's long and complicated. If we go to mass subcriticality, we can avoid some of the complexities coming from criticality, and that's the sort of thing I wanted to sketch. [Question:] Can I ask a question? Why do you need a cylinder of codimension two? [Answer:] Because it's hardest: if you can't squeeze into a codimension-two cylinder, you sure as [expletive] can't squeeze into a codimension-twenty cylinder. Fair enough? But it is important that this codimension-two cylinder is a conjugate pair of coordinates. OK, so what's the basic idea of all these things, starting with Kuksin? You take your infinite-dimensional PDE and approximate it by a finite-dimensional system, not really a PDE, an ODE. You invoke Gromov, either the original statement or some specific capacity argument. And then, actually that's not quite how it's done, let me rephrase it in the language that works for us: you find witnesses to non-squeezing in the finite-dimensional context, and you need to show that a witness persists in the original PDE. So that's our approach. It's not how these previous papers phrased it; theirs is sort of estimates and ta-da. But for the whole-Euclidean-space setting, this is the better language, I claim.
OK, so if we're on the torus, we could finite-dimensionalize the problem purely by introducing a Fourier cutoff. On Euclidean space, you need to truncate both in frequency and in space, and it turns out that the best thing to do is to truncate in frequency, deal with that problem in and of itself, and then a posteriori truncate in space. But on the torus, of course, you need only truncate in frequency. So how would we truncate in frequency? We could replace the equation by the following Hamiltonian PDE: the linear Schrödinger part applied to u_N, plus some nonlinearity; let me save myself some bother later and call it F_N. What's this nonlinearity? You take the function, project it onto frequencies less than or equal to N, and take the nonlinearity of the projected function, but that's not all: the point is it has to be Hamiltonian. If you write down the Hamiltonian with a projected u, then when you form the equation, differentiating with respect to u, there's a projection P_{≤N} left over, and it sits outside the nonlinearity. So by Gromov's theorem, if we're on the torus, there exist solutions to that truncated equation which start in the ball and end up outside the cylinder. And what we'd like to do is take N and send it to infinity. Now, if we send N to infinity: the closed ball is weakly closed, so passing to a subsequence in the weak topology, the limiting initial datum will still be inside the ball. So we take our initial data, take a weak limit, and make that our limiting initial datum; it will be in the ball. And the exterior of the (open) cylinder is also weakly closed. So if I take the solutions at the later time, these points outside the cylinder, and take a weak limit, it's still going to be outside the cylinder.
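The truncated equation, as I understand the board (P_{\le N} is the Fourier projection onto frequencies at most N; signs subject to the talk's ± convention):

```latex
% Frequency-truncated, still Hamiltonian, flow:
i\,\partial_t u_N + \Delta u_N = F_N(u_N),
\qquad
F_N(u) \;=\; \pm\, P_{\le N}\Bigl( \bigl|P_{\le N} u\bigr|^{p}\, P_{\le N} u \Bigr).

% Up to the usual constants, this is the Hamiltonian flow of
H_N(u) = \int_{\mathbb{R}^d} \tfrac12 |\nabla u|^2
\;\pm\; \tfrac{1}{p+2}\, \bigl|P_{\le N} u\bigr|^{p+2} \, dx,

% which is why the outer projection P_{\le N} appears when one differentiates
% the nonlinear term with respect to u.
```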
So u_n at time t converges weakly to something, which we pray is u_∞ at time t. The fact that it converges weakly to something is relatively trivial. The question is: is this function actually that function? Is it the solution? And that's exactly the question of weak well-posedness; that's the proposition. On the torus, this would be a complete solution; on Euclidean space, it's sort of half of the solution. Now, in the mass-critical case, you have a problem which arises from the mass criticality of the equation. The truncated equation looks like cubic NLS with a low-frequency projection, which you'd think makes it more harmless than mass-critical NLS. But if you're going to prove that the sequence converges to anything, you'd better be able to prove it's uniformly bounded: you need the sequence of solutions u_N to be uniformly bounded as N runs off to infinity. Now, this projection is a pretty nontrivial coupling. In particular, while as a Fourier multiplier it has a sign, as a convolution kernel it doesn't even have a sign. So it's not clear how to prove the bounds. You could say the simplest approach would be: well, Dodson proved it without the P_{≤N}, so it's just perturbative. And I'm claiming it's not perturbative, for the same reason that turning this plus sign into a minus sign is not perturbative: we're doing this for large data, and this kernel has both signs. It's not clear you get uniform bounds. And a lot of the meat of that paper, which I'm not talking about, though I do seem to keep talking about it, is precisely dealing with this equation. You could try to just copy out the Dodson paper, and every time you see the nonlinearity, put this projection in. But then when you get to the fundamental monotonicity formula, it breaks, so you're back to nowhere.
So let's prove the proposition. Does the proposition still exist on the board? Yes, OK. So u_n at time t is given by the Duhamel formula. What do I do now? I'm going to take weak limits. I'd like to take weak limits of everything inside; that's not quite possible. The free evolution term is fine: it's a linear transformation, so weak convergence of the inputs guarantees weak convergence of the outputs. Now, what about the Duhamel integral? If you're working in the H¹ theory, H¹ gives you spatial regularity, which then gives you temporal regularity uniformly in n: these would actually be equicontinuous functions of time with values in L². That would give you half of compactness. But at L² regularity, you don't have that property. The Galilei boosts show you can have no such uniformity: imagine a highly oscillatory wave packet whizzing by. At one moment it's there, and the next moment it's completely gone. So you have no uniform continuity in time in L². You could try to go to a negative regularity, but the equation is ill-posed at negative regularity, so you're not actually doing something sensible. But think about what I just said: these Galilei wave packets whizzing past don't hang around; they go very fast. How do we prove that they don't hang around? Local smoothing. Local smoothing is enough to show, after passing to a subsequence, that this converges to some function locally in L² in spacetime. Once you have L² locally, you can go higher, locally in Strichartz spaces: in one dimension, for example, you can go from 2 all the way up toward 6, but you can't reach 6, because you have no compactness in L⁶; that's the scaling symmetry. So again, it's crucial here that the problem is subcritical. So by compactness you can extract a limit.
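The Duhamel formula being passed to the limit, schematically (my reconstruction):

```latex
% Duhamel formula for the approximating solutions:
u_n(t) = e^{it\Delta} u_n(0) \;\mp\; i \int_0^t e^{i(t-s)\Delta}\, F\bigl(u_n(s)\bigr)\, ds.

% The linear term converges weakly since e^{it\Delta} is linear and bounded;
% local smoothing plus subcriticality give enough local spacetime compactness
% to pass to a limit v in the integral term, yielding
v(t) = e^{it\Delta} u_\infty(0) \;\mp\; i \int_0^t e^{i(t-s)\Delta}\, F\bigl(v(s)\bigr)\, ds
\qquad \text{for a.e. } t .
```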
Once you have the convergence of these in suitable spacetime L^p spaces, you can show (the P_{≤N} ought to be harmless, and after a page it is) that this turns into the integral from 0 to t of e^{i(t−s)Δ} applied to F(v). Now, are these two things equal? Well, at almost every time: weak convergence on one side and convergence on the other means the weak limit of this is that, at least at almost every time. After all, this topology tells you nothing about measure-zero sets of times. But the right-hand side is continuous in time, by subcriticality and Strichartz; actually, just by Strichartz. So if I change v on a null set, I don't change the right-hand side, given how it appears, and I can make the identity continuous. So I can delete the "almost every" just by changing v on a null set, and now v solves the Duhamel equation that u_∞ solves. Now, just being a continuous L² solution is not enough, in the mass-subcritical case, to say it's the solution. You need Strichartz bounds, basically because you can't make sense of the nonlinearity for a merely-L² function. But of course the u_n obey Strichartz bounds: unlike the mass-critical case, you can prove that solutions of this goofy truncated equation obey Strichartz bounds by the exact same proof as for the subcritical equation when the projection is missing. So these u_n obey Strichartz bounds, so v obeys Strichartz bounds, so v is truly the solution, which makes it u_∞. With that dispatched, I think you'd agree, much more swiftly than in the mass-critical case, dealing with the space truncation basically parrots the mass-critical case. There are some simplifications, but it's very similar. OK, so that was topic one. I sort of hope you're not desperate for a change, but you're going to get one. Maybe you are. What's topic number two, and where do I write it?
A warning to future speakers: this is a disaster. So the second thing I want to talk about is scattering at the critical regularity; large-data scattering at the critical regularity. Let me state a theorem. This is joint work with (me first, I guess) Satoshi Masaki, now in Osaka; Jason Murphy, currently in Berkeley; and Monica Visan, currently in the audience. Now, there are a couple of conditions. One is mass-subcritical, because that's what I'm talking about. The second: I said the word scattering, and unless your nonlinearity has a big enough power, that's simply not going to happen. You need a little bit of power in the nonlinearity, otherwise there's no way scattering can occur. And then there's a technical condition, s_c > −1, which is not the thing we're most eager to change in the theorem, so it's going to stay for the foreseeable future. And s_c is negative, because that's what we're talking about. Now, suppose I have a solution to NLS, and consider the interaction variable f(t) = e^{−itΔ}u(t); I guess you could call this the interaction picture from beginning quantum mechanics. If u were a linear solution, this wouldn't change in time. We're going to ask that this interaction variable obeys the following weighted bound, with a supremum over the whole lifespan. So I have a maximal-lifespan solution with this property. (Maximal lifespan: if you can't extend it, you have to stop, but if you can, you should; that's how you define u.) If I have a maximal-lifespan solution with this peculiar property, let's call it (★), because we're going to talk about it a lot, then u scatters, in the topology sort of written here. So what does "u scatters" mean? It means it looks like a linear evolution; and if u looks like a linear evolution, then f looks like a constant. So: u scatters in the sense that |x|^{|s_c|} f(t) converges in L² to |x|^{|s_c|} f_∞, for some function f_∞ of x but not of t.
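The hypothesis and conclusion, as I reconstruct them from the board (with f(t) = e^{-it\Delta}u(t) the interaction variable):

```latex
% Standing assumptions: mass-subcritical with enough power for scattering, and
-1 \;<\; s_c = \tfrac{d}{2} - \tfrac{2}{p} \;<\; 0 .

% Hypothesis (\star): a uniform critical weighted bound on the interaction variable,
\sup_{t \in I_{\max}} \bigl\|\, |x|^{|s_c|}\, e^{-it\Delta} u(t) \,\bigr\|_{L^2_x} \;<\; \infty .

% Conclusion: u scatters, i.e. there exists f_{\pm\infty} with
\bigl\|\, |x|^{|s_c|} \bigl( e^{-it\Delta} u(t) - f_{\pm\infty} \bigr) \bigr\|_{L^2_x}
\;\longrightarrow\; 0 \qquad \text{as } t \to \pm\infty .
```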
So first, we're going to devote a fair amount of time to some remarks. Remark zero: scattering does imply (★). If this quantity is going to converge as t goes to plus infinity, say, and really that's all that matters here, everything happens near infinity, plus or minus, then certainly it's bounded in this topology. So scattering clearly implies (★). Now my first actual remark: small data. Suppose I take my initial data at time zero (for various reasons in the paper, that's actually a bad idea, but for conversation we'll do it), lying in the weighted space. In particular, I seem to be taking for granted that such solutions exist. It's been known for a while: Nakanishi and Ozawa proved that this problem is locally well-posed for data u_0 obeying the condition that |x|^{|s_c|} u_0 lies in L². This is a scaling-critical space; many of the predecessors had done analysis with an epsilon of room, but this is the scaling-critical space. And you can see that if I write it correctly: taking a derivative is a reciprocal length, so the derivative |∇|^{|s_c|} has the same dimensions as the weight |x|^{|s_c|}. (And of course I made this dimensional slip up on the board as well.) So this is a critical hypothesis, and solutions exist: locally for large data and globally for small data. In particular, small-data solutions obey (★); these are bona fide examples. For large data the solutions are only local in general, so no such guarantee; compactness tools would be interesting there. It's not a very strong assertion, but at least we have no reason to doubt that (★) holds for all solutions in the defocusing case.
If you have a reason to doubt it, then please tell me, because I have none. In the focusing case, of course, (★) doesn't hold for all solutions: you have solitons. Of course, as in remark zero, if your solution happens to scatter then (★) will hold; but it doesn't hold for all initial data, because of solitons. How do we see that solitons break it? Well, for a soliton, the interaction variable has no limiting profile. The soliton is wiggling around in a compact set in function space, which means that when I propagate it by the free flow for a large time, it's going to get spread out astronomically; it loses compactness. Basically, it spreads out, and if it spreads out a lot, then the weighted norm goes to infinity. So (★) certainly doesn't hold for solitons; therefore it doesn't hold for all data in the focusing case, consistent with the absence of scattering. One peculiarity, again, about the mass-subcritical case is that solitons are now stable, at least ground-state solitons. Therefore ground-state solitons cannot be minimal obstructions to scattering in any topology, because they have a neighborhood in that space that also doesn't scatter. (Now you've made me not have to do it. Future speakers: it's actually pretty easy; there's ample room.) So now, one more general remark: why are we making such an assumption? Well, if you know about the study of critical equations at a non-conserved critical regularity, at the moment we don't really know what to do except to assume such a priori bounds, and there is no conservation law at the critical regularity. But listen: there is something that smells like a conservation law at the critical regularity, which is the pseudoconformal energy. The pseudoconformal energy comes from the pseudoconformal invariance of the mass-critical equation.
If I take a solution of the mass-critical equation and apply the pseudoconformal transformation, I get another solution, which has an energy. That's a number, and that number is the pseudoconformal energy of the first solution. So the pseudoconformal energy does give you something, but you need more of the initial data: you need that the full weight |x|u_0 is in L². And in general, it will only give you something at the critical level if p is big enough. For the pseudoconformal energy to lead to something useful, by which I mean critical bounds, the regularity it controls has to be no less negative than... sorry, it has to land at or below the critical regularity. Because after all, if you get something even more negative than s_c, you can just interpolate: take a logarithmically convex combination, Hölder, with conservation of mass and bring it up toward zero. So being too negative is a good thing; being not negative enough is the bad thing. And if you work that out, it gives you a quadratic equation in p, whose root is commonly called the Strauss exponent. So if p is above the Strauss exponent, a quadratic irrational, then you can get a priori bounds from the pseudoconformal energy, for this smaller class of initial data; and this is a strictly narrower class than the one discussed in the theorem. OK, now, dirty secret: my hypothesis actually embodies a priori decay. So let's define this operator J, which is the solution of the Heisenberg equation for the position operator; we'll get the signs right, they're easy if you think about it, but I'm not thinking at this particular moment. So that's the solution of the Heisenberg equation: if you differentiate it, and differentiate the following claimed formula for the answer, you'll see that it is the answer.
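For the record, the weighted requirement and (what I believe is) the Strauss exponent; treat the exact constants as my reconstruction rather than the board's:

```latex
% Pseudoconformal energy: requires the full weight on the data, |x|\,u_0 \in L^2,
% and is built from \| (x + 2it\nabla)\, u(t) \|_{L^2}^2 plus a potential term.

% It yields a priori bounds at a regularity determined by p; these land at or
% below s_c (and hence, interpolating with mass conservation, give critical
% bounds) precisely when p exceeds the Strauss exponent, the positive root of
d\,p^2 + (d-2)\,p - 4 = 0,
\qquad
p_{\mathrm{Strauss}}(d) = \frac{2 - d + \sqrt{d^2 + 12d + 4}}{2d}.
```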
Differentiate both things in time and you'll see they're the same, and you'll get the signs right as well; again, I can get the signs wrong, but if you think for a moment you'll get it right, and in particular you'll see whether I'm right, just by differentiating. OK, so this is the center-of-mass-intercept operator, if you like. It gives a conserved quantity, for example, for the linear Schrödinger equation; that's almost tautological. And what does it represent? Well, momentum is also conserved, so the center of mass goes in a straight line. And at time zero, where was it? J tells you: the center of mass at time zero is a conserved quantity, a sort of time-dependent expression of a conserved quantity. OK, and this operator is unitarily conjugate (these are unitaries) to your favorite operator, which is either multiplication by x or differentiation. So in particular, if I want to take functions of this operator, which I do — I want to raise this vector operator to a power, morally -s_c — then star is equivalent to the statement that this power of J, applied to u(t), is bounded in L². Now, if you use, say, this representation and Sobolev embedding, this will actually tell you that you have some a priori decay, which is starting to make the theorem sound less interesting; I'll try to counter that shortly. That many derivatives in L² — that's Sobolev embedding, once I put in the absolute-value signs. OK, so that actually tells you your solution is in this space. What's this horror? This is Lorentz-space notation: this is the weak-L^p space, where this is the p and the infinity means weak, and then L^∞ in time. And that's a scaling-critical norm, OK?
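In symbols — reconstructing what was on the board, so the signs are at your own risk — the operator and the decay it encodes are:

```latex
% The Galilei / center-of-mass operator: the free-flow conjugate of x,
J(t) \;=\; e^{it\Delta}\, x\, e^{-it\Delta} \;=\; x + 2it\nabla
\;=\; e^{\,i|x|^2/4t}\,(2it\nabla)\, e^{-i|x|^2/4t}.
% The last identity is the unitary conjugation to differentiation:
% writing \sigma = -s_c > 0, one can take fractional powers,
|J(t)|^{\sigma} u \;=\; e^{\,i|x|^2/4t}\,(2t)^{\sigma}\,
    |\nabla|^{\sigma}\!\left(e^{-i|x|^2/4t}\, u\right), \qquad t > 0.
% Hence a bound \| |J(t)|^{\sigma} u(t) \|_{L^2} \le C, combined with
% the Sobolev embedding \dot H^{\sigma} \hookrightarrow L^{2d/(d-2\sigma)}
% (valid for 0 < \sigma < d/2), gives the a priori decay
% \| u(t) \|_{L^{2d/(d-2\sigma)}} \lesssim t^{-\sigma},
% since the quadratic phase has modulus one.
```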
And in some sense the theorem says that this implies scattering. But, you know, if I told you I had a scaling-critical norm that looked like a Strichartz norm, you'd expect to prove scattering in a page. It doesn't work, and the problem ultimately comes from this infinity. It's the standard problem: knowing that a weak-L^p norm is finite doesn't mean that, if you go far enough out in time, that same norm is small. The classic well-posedness argument for critical equations is: if an L^p norm is finite, then on a small enough set it's small; that's monotone convergence. But it doesn't work in weak spaces. And this problem, of course, reappears everywhere: local well-posedness for data in critical weak spaces doesn't really work, because you can't find anything that's small. We have the same problem here. Now, small-data solutions obey this. Why do we know small-data solutions scatter? Well, one way of saying it is that for small data, the solution is actually in this space with a Lorentz index 2 rather than infinity. It takes almost nothing to gain that 2, but it does take something. So, again, we're through the looking glass: this is mass-subcritical, and everything's a bit funny. For mass-supercritical equations, you have critical a priori bounds that are L^∞ in time, and if you want to get to bounds that aren't L^∞ in time, there are 50 pages in between; turning an infinity into a non-infinity can be complicated. So, how do we do it? Well, in some sense I've given away the moral, which is all the papers that have been written on conditional well-posedness.
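The failure of smallness in the weak space can be seen on one standard example — this is the spatial model of the phenomenon, not taken from the talk, but it shows exactly why "finite implies eventually small" breaks:

```latex
% The power f(x) = |x|^{-d/r} lies in L^{r,\infty}(\mathbb{R}^d) but in
% no L^r. Its weak norm cannot be made small by cutting away a compact
% set: for every R > 0,
\big\|\, |x|^{-d/r}\, \mathbf{1}_{\{|x|>R\}} \big\|_{L^{r,\infty}}
\;=\; \sup_{\lambda>0}\ \lambda\,
      \big|\{\, |x|>R \;:\; |x|^{-d/r} > \lambda \,\}\big|^{1/r}
\;=\; \big\|\, |x|^{-d/r} \big\|_{L^{r,\infty}},
% because the supremum is attained in the limit \lambda \to 0, i.e. it
% comes entirely from the far tail. So the monotone-convergence step of
% the critical well-posedness argument has nothing to bite on.
```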
Assume a priori L^∞-in-time bounds and prove scattering, for a mass-supercritical equation, or a conformally supercritical equation. (Do you want to talk about wave? OK.) So it's three steps, which again: step one is existence of minimal criminals, where 1A and 1B are existence and compactness — almost periodicity, whatever you want to call it. Step two (I promise I will not leave this board or use the eraser) is to reduce to "self-similar" solutions, and step three is to show that they don't exist. Now, many papers could be summarized by those three sentences, but of course every one is different; and again, as I've tried to assert, we're through the looking glass, so everything smells the same, but when you talk to it, it's different. So step one means a linear profile decomposition in this peculiar space. What are the defects of compactness? They're a little unusual: Galilei boosts and scaling. Space translation is broken, and time translation is broken; we'll talk about why later, if you care. What do I mean by self-similar? Normally, self-similar means u can be written as t to the minus one over p times some function of x over root t. That's what self-similar, without quotation marks, means. What do I mean by self-similar with quotation marks? I mean the profile depends on time as well, but as you vary that parameter, the profile varies over a compact set. So instead of a single profile, as for a truly self-similar solution, it varies over a compact set; it behaves to some degree like a truly self-similar solution, except of course you can't use any elliptic theory, because it's wiggling in time. And then you want to prove it doesn't exist. So why is there only one type of enemy?
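Writing out the two notions as I understand them from the board — the exponent is forced by the scaling symmetry recalled in the comment:

```latex
% Truly self-similar (no quotation marks):
u(t,x) \;=\; t^{-1/p}\, F\!\left( x/\sqrt{t} \right).
% "Self-similar" (with quotation marks): the profile may move in time,
u(t,x) \;=\; t^{-1/p}\, F\!\left( t;\; x/\sqrt{t} \right),
\qquad \{\, F(t;\cdot) \;:\; t > 0 \,\}\ \text{precompact}.
% Both ansatze are invariant under the scaling
% u_\lambda(t,x) = \lambda^{2/p}\, u(\lambda^2 t, \lambda x),
% which is why t^{-1/p} is the only admissible amplitude.
```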
In a sense, this is the only type of enemy consistent with what's written over there: if you look at the scale of that function, in order to be in that weak space it has to spread out at exactly this rate. Of course, that's easy to say, but not quite so easy to prove. And then, why doesn't it exist? It doesn't exist because we can prove that it has finite mass. This solution blows up at t = 0, but finite-mass solutions don't blow up at all. That's the end. Yeah, that's the end.

Q: Do you have any guess as to what might happen if the data does not satisfy the weighted-L² bound?
A: OK, so... what happens if you take data that do not satisfy this weighted norm? Well, this is really a property of the solution, right? So, let's start with data that do not satisfy it, and...
Q: At time zero?
A: At time zero, OK. Yes.
Q: Do you have a guess as to how the solution might behave?
A: Well, there are many ways to not obey this. One is to not decay at all; and if you have decay at infinity which is slower than critical, that means the tail at infinity can continuously mess with you forever, and so you probably can't have scattering. If instead you have a more severe singularity at the origin, then it's probably ill-posed. So there are two answers; there are probably more.
Q: In the sense that you have to do a lot of work because you have this norm, this infinity... what is the advantage of your condition over just assuming that u(t) is in the right space?
A: OK. So, one: this is a space in which you have local well-posedness. I mean, it's a condition at a single time, so I can sample the solution at a single time, mess with it a little bit, and it will still generate a solution into the future. The other condition, at a single time, doesn't mean much of anything.
And even if I somehow choose a time where it means the most it can mean, if I make even the slightest modification to the solution, the solution might not exist at all. What you want is a bootstrap assumption, an inductive hypothesis — you need an inductive hypothesis that actually leads to solutions. And that's why simply taking critical L^p bounds doesn't seem to go anywhere in the mass-supercritical setting, either.
Q: Can you say anything about [question partly inaudible]?
A: You mean large data? Because for small data the literature is enormous; for large data the literature is quite scarce. At this moment, I have nothing to say. Maybe if I think about it I will have something to say, but at this moment I have absolutely nothing to say; I haven't really thought about it.