Thank you very much for inviting me. This is a very great honor, and thank you to the organizers for thinking of me. What I am planning to talk about is very much related to what Professor Kenig talked about this morning and also the other day. It's the same equation: box u equals minus u to the fifth, in 3+1 spacetime dimensions. My sign convention is slightly different, so I hope it doesn't confuse you. It's the focusing energy-critical wave equation. Now, quickly: why do you care about this very specific model? The reason, I like to think, is that it reflects a lot of phenomena that you encounter in other equations, and a lot of the techniques which have recently been developed for this equation have helped elucidate more physical equations. For example, methods by Kenig and Merle have helped me and collaborators address problems about critical wave maps, Maxwell-Klein-Gordon, and other equations. So it seems a very rich source of inspiration for new techniques. For example, wave maps from R^{2+1} into S^2, or the critical Yang-Mills equation on R^{4+1}, share a lot of properties, and we hope that we can then say more about those equations once we understand this somewhat silly model equation here. Now, in Kenig's talk we heard a lot about type 2 solutions. It turns out that for this kind of equation, type 2 solutions are something very unstable and very atypical. However, for rather trivial reasons, for these two more physical models all dynamics are type 2, and that's why an understanding of type 2 dynamics is such a natural and important thing. So let me go very quickly over some of the terrain that Professor Kenig covered in his talk. As we know, the energy-critical wave equation is well-posed in H^1.
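To fix notation, and hedging on the exact sign convention (the speaker notes his differs slightly from Kenig's), the model equation can be written as:

```latex
% Energy-critical focusing wave equation in 3+1 dimensions
% (one common normalization; sign conventions vary between authors)
\Box u = -u^5, \qquad \Box = -\partial_t^2 + \Delta,
\qquad u = u(t,x), \quad (t,x) \in \mathbb{R}^{1+3},
```

i.e. $u_{tt} - \Delta u = u^5$, with the fifth power making the $\dot H^1$ scaling energy-critical.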
Somewhat more precisely, it is strongly well-posed in H^{1+ν} for any ν positive, in the sense that you can then indicate a time interval of existence whose length depends just on the norm. If you only work at the level of H^1, then the length of existence also depends on the profile of the initial data. So the equation is strongly well-posed in H^{1+}. From my point of view, any solutions you can construct of regularity H^{1+} are honest solutions; they are interesting solutions. You don't have to require C-infinity solutions — H^{1+} solutions are true solutions, not artifacts. As we know, there is a conserved energy, which has an indefinite sign, and this is responsible for the fact that you have type 1 and type 2 dynamics, because you do not a priori have control over the dot-H^1 norm of the solution. Just to remind you: type 1 solutions are those for which the spacetime gradient of u has infinite supremum of its L^2 norm on the interval of existence. The interval of existence is the maximal lifespan in the sense of Shatah-Struwe: the L^5_t L^{10}_x norm stays finite on compact subintervals, but becomes infinite as you approach the endpoints of this interval. Type 2 dynamics, on the other hand, are characterized by the requirement that we have an a priori bound on the L^2 norm of the spacetime gradient of the solution. And as we heard, type 1 and type 2 dynamics are quite different. It is very easy to exhibit type 1 dynamics of singular character, by taking the ODE solution and truncating it to ensure that it has finite energy, say. These have been known forever — this is trivial to construct. However, type 2 dynamics of an interesting character are much, much more subtle, as we learned, and understanding them is a more recent development.
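In symbols, the conserved energy, the type 1 / type 2 dichotomy, and the ODE blow-up just mentioned read roughly as follows (with the focusing convention $u_{tt} - \Delta u = u^5$; the normalization of the energy is the standard one, not necessarily the speaker's):

```latex
% Conserved energy, of indefinite sign due to focusing:
E(u)(t) = \int_{\mathbb{R}^3} \Big( \tfrac{1}{2}\,|\nabla_{t,x} u|^2 - \tfrac{1}{6}\,u^6 \Big)\,dx

% Dichotomy on the maximal interval of existence I:
\text{type 1: } \sup_{t \in I}\,\|\nabla_{t,x}u(t)\|_{L^2_x} = \infty,
\qquad
\text{type 2: } \sup_{t \in I}\,\|\nabla_{t,x}u(t)\|_{L^2_x} < \infty

% The x-independent ODE blow-up (to be truncated to finite energy):
u(t,x) = \big(\tfrac{3}{4}\big)^{1/4}\,(T - t)^{-1/2}
```

One checks directly that $u_{tt} = \tfrac{3}{4}\,c\,(T-t)^{-5/2}$ matches $u^5 = c^5 (T-t)^{-5/2}$ exactly when $c^4 = 3/4$.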
Now, as we know, there has been a lot of interest in this model, in part because of the Duyckaerts-Kenig-Merle program, which gives a complete abstract understanding of all these type 2 dynamics. And again, understanding type 2 dynamics here is most likely going to help us understand them for critical wave maps and Yang-Mills, which are more physical models; that's why this is a really natural thing to do. In particular, for wave maps from R^{2+1} into S^2, where the structure of static solutions seems to be much simpler, there is surely a very neat picture in terms of soliton resolution waiting to be uncovered, and similarly for the critical Yang-Mills equation. OK, so now let's approach these type 2 dynamics. As we know from classical works of the 80s, there are static solutions W(x): finite-energy, time-independent solutions of the corresponding elliptic problem, Laplacian of W equals minus W to the fifth. Correspondingly, we have the family of rescalings. And I should add: these are the only radial static solutions of this problem, and throughout this talk I will basically make a radiality assumption — I will not discuss the non-radial case here. So what is the key quest? On the one hand, you want to completely characterize all type 2 solutions in an abstract fashion. You do that by working just in the energy space, and you get these beautiful soliton resolution pictures of Duyckaerts-Kenig-Merle. But then there are other things you would like to understand: can you actually give me solutions that do what is described in the soliton resolution? And what does it mean if somebody runs the equation on a computer — will they see this? What will they see? What is the stability of these solutions?
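For the record, the radial static solution and its $\dot H^1$-invariant rescalings are the classical Aubin-Talenti functions:

```latex
\Delta W + W^5 = 0, \qquad
W(x) = \Big(1 + \tfrac{|x|^2}{3}\Big)^{-1/2}, \qquad
W_\lambda(x) = \lambda^{1/2}\, W(\lambda x),
```

and in 3 space dimensions the power $\lambda^{1/2} = \lambda^{(d-2)/2}$ is exactly the one leaving the $\dot H^1$ norm and the energy invariant.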
And what has emerged in recent times, I think, is that understanding the stability issues for these solutions very much hinges on the topology you are working in. If you just work in the energy topology, you get a hell of an instability for these problems. However, if you impose somewhat more regularity, then suddenly these things become more rigid, and you start to get a better idea — you can capture them somehow. To get a fuller understanding, you have to vary the regularity. And that's very interesting, because hyperbolic problems apparently allow more of a range of regularities which get preserved by the flow, whereas parabolic problems are more rigid: those have smoothing properties, so there you don't have a solution that stays exactly H^{3/2}, but here you do. So this regularity issue very much influences what kind of dynamics you get. Now, here I give you a statement of DKM in its strongest form, which I think is an absolutely fantastic theorem. We heard it this morning, but here it is again. If you have a type 2 solution which breaks down in finite time, then you can pick scales λ_j(t), which we can arrange in decreasing order, such that the quotients λ_i(t)/λ_j(t) approach infinity for i < j. You also always have the requirement that these parameters blow up strictly faster than the self-similar rate 1/(T − t), where T is the blow-up time. Then you can write your solution as a sum of these profiles, up to signs, plus an error term. The error term, as we learned this morning, you may think of as a solution of the linear wave equation: close enough to the blow-up time it solves the nonlinear wave equation, but it is well approximated by the corresponding linear flow.
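Schematically (radial case, suppressing the exact signs and error estimates), the finite-time statement just described is:

```latex
% Duyckaerts--Kenig--Merle decomposition near the blow-up time T:
u(t) = \sum_{j=1}^{J} \pm\, W_{\lambda_j(t)} + \varepsilon(t),
\qquad
\frac{\lambda_i(t)}{\lambda_j(t)} \to \infty \ \ (i < j),
\qquad
\lambda_j(t)\,(T - t) \to \infty \ \ (t \to T),
```

with $\varepsilon(t)$ well approximated, near $T$, by a solution of the free linear wave equation.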
So really, you can think of this as a linear object. You have this beautiful description. Similarly, in the infinite-time case you don't even have to impose type 2, because such solutions, if they exist globally in time, are automatically type 2. And you get the same picture: parameters satisfying formally the same requirement, such that the solution decouples as a sum of these profiles plus a linear term. Now, on the other hand, the program of Duyckaerts-Kenig-Merle does not a priori furnish ways to realize such dynamics other than W itself, and it does not directly give information about stability. However, as I will mention later on, it provides a very powerful tool to understand the stability of certain solutions in certain situations. In fact, in some joint work we have already used this classification to show that you get a certain type of behavior of solutions in a very specific situation. All right, so my point of view here is that I want constructive techniques for exhibiting type 2 phenomena. You can say: OK, this is very academic — why am I doing this, why do I care about exotic solutions? Well, on the one hand, I think it allows me to understand other equations better. On the other hand, the techniques that come out of this are extremely useful: some of the techniques we developed in the context of this equation have in the meantime been used for quasilinear equations, and so on. So it's a very rich source of natural techniques. And also, it's a big mystery, because I think we are still very far from understanding what all the possible type 2 solutions really are and what their stability properties are — how do we quantify that? OK, so now let's go back to some very primitive origins.
The first time I learned about this equation was around 2004, well before the Duyckaerts-Kenig-Merle program. At the time, I don't think very much was known about this equation. People knew the small data theory; people knew about the ODE blow-up. And then there were some numerical results, in particular by Bizoń, Chmaj, and Tabor. They had seen that by taking nice bump initial data, multiplying it by a scaling factor, and then varying that scaling factor, one could, by fine-tuning, realize type 2 solutions — in particular W, as a kind of threshold solution dividing scattering solutions from blow-up solutions. So at that point, Schlag and myself were thinking: maybe we can depart from this W, the static solution, and try to understand its stability. You do this very naively by simply adding a perturbation; you get a wave equation for this perturbation, and you try to understand whether you can control that solution. It turns out that you are very naturally led to a modulation of W. Remember, W_λ is λ^{1/2} times W(λx), and the natural thing to do is to actually let this λ vary with time. What we could show is that if you add a perturbation which is very nice — in particular, the perturbation had to satisfy a certain co-dimension one vanishing condition, and it had to be compactly supported; that's a technical point that can be strengthened — then we could construct solutions which decouple into a slightly modulated ground state plus a radiation term, and the radiation term actually scatters in a suitable sense. More precisely, we arranged that the modulated scaling factor λ(t) converged to a limit at time infinity.
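As a toy illustration of the fine-tuning experiment just described (not the actual numerics of Bizoń and collaborators), here is a minimal sketch: a crude radial finite-difference scheme for $u_{tt} = u_{rr} + \tfrac{2}{r}u_r + u^5$ (assuming the focusing sign convention), with a bisection in the amplitude of a bump. The grid sizes, the blow-up proxy (amplitude exceeding 50 within a fixed time window), and the bump profile are all ad hoc choices for illustration only.

```python
import numpy as np

def evolves_to_blowup(A, R=8.0, N=400, T=3.0):
    """Evolve radial u_tt = u_rr + (2/r)u_r + u^5 from bump data A*exp(-r^2),
    zero velocity; return True if the amplitude runs away within time T
    (a crude numerical proxy for blow-up)."""
    dr = R / N
    r = np.linspace(0.0, R, N + 1)
    dt = 0.4 * dr                      # comfortably within the CFL limit

    def rhs(u):
        out = u**5                     # focusing nonlinearity
        out[1:-1] += (u[2:] - 2*u[1:-1] + u[:-2]) / dr**2 \
                     + (u[2:] - u[:-2]) / (dr * r[1:-1])   # u_rr + (2/r)u_r
        out[0] = 6.0 * (u[1] - u[0]) / dr**2 + u[0]**5     # Δu = 3 u_rr at r = 0
        out[-1] = 0.0                  # crude Dirichlet wall (wave never reaches it)
        return out

    u = A * np.exp(-r**2)
    up = u + 0.5 * dt**2 * rhs(u)      # second-order Taylor start from zero velocity
    t = dt
    while t < T:
        u, up = up, 2*up - u + dt**2 * rhs(up)   # leapfrog step
        t += dt
        if not np.isfinite(up).all() or np.abs(up).max() > 50.0:
            return True
    return False

# Fine-tune the amplitude: bisect between a dispersing and a blow-up datum,
# mimicking the threshold experiment.
lo, hi = 0.1, 3.0
for _ in range(12):
    mid = 0.5 * (lo + hi)
    if evolves_to_blowup(mid):
        hi = mid
    else:
        lo = mid
print("approximate threshold amplitude:", 0.5 * (lo + hi))
```

Below the threshold the bump disperses; above it the amplitude at the origin runs away in finite time, and the bisection homes in on the borderline where (in the actual PDE) the threshold dynamics are expected to look like the rescaled ground state W.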
And then you could show that if you write the solution in this form, this v-infinity part scatters to 0. So what was the idea behind this construction? You linearize, obviously, around W. If you forget for now about the modulation, you obtain the elliptic operator minus Laplacian minus 5 W^4, and you can study its spectrum. You find that if you restrict to radial functions, there is one negative eigenvalue, corresponding to a positive eigenfunction g which decays exponentially at infinity. You also have a resonance at 0 — almost an eigenfunction, but slightly weaker. This simply comes from the scaling invariance of the equation: if you take W_λ and differentiate with respect to λ, you get a function, the resonance, which gets killed by this operator; apply the operator to it and it vanishes. OK. Now, if you just think in terms of the linear approximation of this equation, then in general, because of this negative eigenvalue, you get exponential growth in time. So if you just look at the linear equation u_tt = −Lu, you are led to impose a natural co-dimension one condition to prevent the solution from growing exponentially. Of course, here we have a nonlinear problem, so you have to take the nonlinear interactions into account, but it is fairly natural to expect a co-dimension one set of initial data corresponding to good perturbations for all positive times, all the way up to infinity. There is a technical issue because of the resonance: the resonance prevents the wave flow corresponding to this operator from having good dispersive estimates, so you also have to enforce a kind of orthogonality to the resonance — and this we do by means of modulating in λ.
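In symbols (with μ and g my notation for the eigenvalue and eigenfunction), the radial spectral picture just described is:

```latex
L = -\Delta - 5W^4, \qquad
L\,g = -\mu^2\, g \quad (\mu > 0,\ g > 0,\ \text{exponentially decaying}),

L\,\Lambda W = 0, \qquad
\Lambda W := \frac{d}{d\lambda}\Big|_{\lambda = 1} W_\lambda
= \tfrac{1}{2}\,W + x\cdot\nabla W
\quad \text{(zero resonance: decays like } |x|^{-1}\text{, hence not in } L^2),
```

so the linearized flow $u_{tt} = -Lu$ has the exponentially growing mode $e^{\mu t}\,g$, and the co-dimension one condition removes precisely the coefficient of this mode in the data.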
So that's why you have to let λ depend on time. And here topology comes in. If you modulate in this fashion and move the error you make to the right-hand side, you get error terms which are in some sense very bad. To control these errors — because in the end you have to integrate in time in the Duhamel formula — you have to make sure that λ(t) converges sufficiently rapidly to λ-infinity. This requires imposing a lot of decay on the perturbation at time 0, decay at spatial infinity, and also enough smoothness. So you get this stability of W, up to a co-dimension one set, only if you work in a very, very strong topology. Now, at the time our technique used resolvent expansions, in which Schlag is an expert; nowadays we would rather use Fourier-type techniques for this, which would be somewhat simpler. So again: here we have to work in a very strong topology to avoid the full complexity of possible dynamics coming from Duyckaerts-Kenig-Merle. We tame the dynamics by working in a sufficiently strong topology near the ground state. Now, purely technically, we had this co-dimension one manifold passing through the initial data of the static solution. But the co-dimension one condition arose for purely technical reasons: we didn't really understand whether this hypersurface — which I call Σ in the statement of the theorem, corresponding to the solutions in this theorem — has some sort of intrinsic characterization. From the work of Bizoń, we expected that this manifold should play the role of a threshold between two types of dynamics, but at the time we didn't understand it.
However, a couple of years later — actually eight years later — we realized that indeed one can show directly that if you perturb the initial data on this hypersurface Σ in either direction, above or below, by adding a multiple of the unstable mode, you can trap the corresponding solution in two totally different regimes. If you go above, say, your solution is going to blow up; if you go below, your solution is going to scatter toward zero. To prove this result — in particular, the scattering-to-zero part of the theorem — we had to use the Duyckaerts-Kenig-Merle theorem. OK, so how did we do this? The idea, basically, was to look at the family of rescaled initial data (W_λ, 0). We show that as you perturb your initial data slightly in this direction and take a small tubular neighborhood of this one-dimensional family of rescalings, your solution is in fact going to move away from this small tubular neighborhood, and it gets trapped in one of two possible regimes. These regimes can be distinguished by a functional K(u): the integral of |∇_x u|² minus u⁶ dx. As a matter of fact, it turns out that either the solution gets trapped in a regime where this functional is always positive, or it gets trapped in a regime where this functional is always negative. In the case where the functional is always negative, you can use a Levine-type blow-up argument to show that your solution is going to blow up. On the other hand, if it gets trapped in the positive regime, then you can show, first of all, that the solution is type 2 — a very simple argument just involving the energy. Second, you then know that if a solution in this regime were to develop a singularity, you would have the classification by Duyckaerts-Kenig-Merle.
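Written out (radial case, schematic), the trapping functional and the dichotomy just described are:

```latex
K(u) = \int_{\mathbb{R}^3} \big( |\nabla_x u|^2 - u^6 \big)\,dx,
\qquad
\begin{cases}
K(u(t)) < 0 \ \ \forall t & \Longrightarrow\ \text{blow-up (Levine-type argument)},\\[2pt]
K(u(t)) > 0 \ \ \forall t & \Longrightarrow\ \text{type 2; scattering once singularity}\\
& \qquad \text{formation is excluded via DKM}.
\end{cases}
```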
And then you show that, as a matter of fact, if such a scenario were to happen, the solution would have to come back into a small neighborhood of this one-dimensional family of rescalings. To rule that out, there is a so-called one-pass theorem, which we prove, relying on a simple virial identity. So this was, I thought, a very nice application of Duyckaerts-Kenig-Merle to control a very explicit dynamics in a situation like this. So here is this result. Still, in the case where you have blow-up for these perturbations, we don't know whether it is type 1. It should be type 1, but we have no idea how to show this. So now, how about constructing type 2 solutions which blow up in finite time? Clearly, something very different has to be done than in the preceding theorem with Schlag from 2004. The reason is that such a solution will have to be of the form W_{λ(t)}(x) plus v(t,x), and the λ here is going to vary a lot. So we do not expect to be able to approximate such functions u(t,x) well by solutions of a wave equation with a fixed reference wave operator; it's not so clear how to do that. This is a very different kind of situation, because before, λ(t) varied very little, while here we expect λ(t) to vary enormously. Nevertheless, there is the following result, which actually goes back to 2007; this is joint with Schlag and Tataru. There exist type 2 blow-up solutions where λ(t) is of the form t^{−1−ν} — a very explicit scaling law — provided, in this work, ν is bigger than 1/2. The way I've written it, you can decouple the solution into a bulk term, which concentrates at the origin at time 0, plus a term v(t,x) which remains regular.
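Schematically, the 2007 Krieger-Schlag-Tataru solutions take the form (blow-up as $t \to 0$ in this normalization):

```latex
u(t,x) = W_{\lambda(t)}(x) + v(t,x), \qquad
\lambda(t) = t^{-1-\nu}, \qquad \nu > \tfrac{1}{2} \ \text{(in the 2007 work)},
```

with the bulk $W_{\lambda(t)}$ concentrating at the origin as $t \to 0$ and the term $v$ remaining regular.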
And somewhat interestingly, as a byproduct of our construction, these solutions are actually not of class C-infinity; they are of lesser regularity, H^{1+ν/2}. That seems to be something really germane to hyperbolic equations. So where did these solutions come from? As a matter of fact, they were simply a byproduct of work we did on the critical wave maps equation at the time. We wanted to understand blow-up solutions for wave maps from R^{2+1} into S². There was lots of numerical evidence that wave maps from R^{2+1} into S² develop singularities, and in 2006, also in joint work with Schlag and Tataru, we showed that indeed you get blow-up solutions for that model, of exactly the same form, if you replace W by the corresponding ground state for wave maps, which is a harmonic map. We realized that exactly the same technique could be used for this model here, and so we got these funny type 2 solutions. At the time the DKM program did not exist yet, but maybe having such solutions in hand served as a little bit of motivation to pursue such a program. Now, building such solutions is actually not trivial. The next type 2 solution for a similar model was constructed three years later by Hillairet and Raphaël, for the critical nonlinear wave equation in 4+1 dimensions, which is of this form. And they have a very different scaling law: λ(t) is t^{−1} times e to the square root of log t. This expression blows up faster than any power of the logarithm, but slower than any power of t, so these solutions are strictly in between t^{−1} and t^{−1−ν}, the rates that we had before. And then more recently, again after a five-year lapse, there is work by Jacek Jendrej, who constructs, again, polynomial blow-up rates.
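For comparison, the scaling laws mentioned so far, schematically as $t \to 0$ (I write $|\log t|$ under the square root, which is how I read the spoken formula):

```latex
\lambda(t) = t^{-1-\nu}
\quad \text{(Krieger--Schlag--Tataru, } \nu > \tfrac{1}{2}\text{; later extended to all } \nu > 0\text{)},

\lambda(t) = t^{-1}\, e^{\sqrt{|\log t|}}
\quad \text{(Hillairet--Rapha\"el, 4+1 dimensions)},
```

the second lying strictly between the self-similar rate $t^{-1}$ and every polynomial rate $t^{-1-\nu}$.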
It is more similar to the ones I displayed before, in 5+1 dimensions, and he gets better control on the stability of such solutions, characterizing it in terms of the radiation that comes out at the blow-up time. Now, the Hillairet-Raphaël result doesn't just give explicit examples: they also have an argument, or at least they mention in that paper, that their solutions are co-dimension one stable. So again you get the kind of picture from the 2004 theorem, but this time for type 2 blow-up solutions. And this suggested to Schlag and myself that, in principle, one should be able to extend the 2009 result to the full range of ν's for possible blow-up solutions. In fact, this seems even more interesting, because solutions with smaller ν, which are closer to the self-similar rate t^{−1}, should somehow be more stable. So you might hope that for solutions of this form you can again establish this kind of co-dimension one stability result. Explicit examples for the full range of possible polynomial rates allowed by the Duyckaerts-Kenig-Merle classification were obtained by Schlag and myself in 2012. The result is formally of exactly the same form: you get a bulk part which gets rescaled according to this law, plus a radiation term. Of course, because ν is smaller for these solutions, the radiation term, of regularity H^{1+ν/2}, is of lesser regularity — but it is still strictly better than energy regularity, so this is still in the regime where the equation is strongly locally well-posed. Now, these solutions with small ν are the starting point for what I have been working on very much recently, and I will get to that soon. However, I would do a gross injustice if I didn't mention some of the other developments that have been taking place.
So more type 2 dynamics actually have been built; there has been a fair amount of activity. On the one hand, something I very much liked: I was visiting Paris and ran into Thomas Duyckaerts. At the time they were completing the infinite-time case of the characterization of type 2 solutions — not blow-up, I mean solutions existing all the way up to time infinity — and he asked whether one could build solutions which behave like that. And then a result by Donninger and myself showed that indeed there is, again, a continuum of such solutions. This time, these solutions exist all the way to time infinity, and the scaling parameter is t^ν, where ν just has to be small enough in absolute value. It can be negative or positive. So you can have solutions which vanish at infinity but don't scatter, or solutions which concentrate at infinity and get arbitrarily large — they blow up at infinity in some sense. The optimal range here is unknown, as is the optimal regularity you can get for such solutions. I would expect you could construct such solutions of class C-infinity; we could only construct them in class H^1, I don't know — because we had to use techniques from the proof of Duyckaerts-Kenig-Merle to do this. Anyway, you can say: OK, almost all of these examples have very specific polynomial scaling laws; can one maybe construct more general solutions with more general scaling laws? An example of this was done by Donninger, Huang, Schlag, and myself, where we show that you can in fact impose a little bit of oscillation in these ν exponents. In effect, you can do a lot there — you could allow analytic functions of a certain type here — so it's incredibly complicated. Interestingly, to construct such solutions the ν has to be large enough: it had to be bigger than 2.
This is concordant with the expectation that the larger ν is, the more unstable these solutions become. I should also mention recent, very interesting work on multi-bubble type 2 solutions. So far I have only talked about one bubble, but there is now also evidence for multi-bubble solutions, due to Jacek Jendrej. He managed to construct solutions which exist for all forward time and which decouple into a static term, a term which scales exponentially fast, and a term which decays. And then, very interesting, concerning finite-time blow-up with two bubbles, he showed a negative result: you do not have a blow-up solution with exactly two profiles in finite time such that nothing else is left — this in the radial case. So, interestingly, maybe there is less variety in these soliton resolutions in the radial case for finite-time blow-up; I don't know. Anyway, these are interesting things — an incredible zoo of type 2 solutions, very complicated. The natural question, again, is: which ones are the generic ones, in terms of the optimal genericity that they can have, and which ones will you probably never see? And again, there is a natural generalization of the result of Schlag and myself from '04, in joint work with Nakanishi and Schlag, going back to 2013. It shows that if you take any type 2 solution of the form of one rescaled ground state plus an error term, and you assume that this error term is very small — in the sense that its spacetime gradient has small L² norm on the interval of existence of the solution — then you can construct a stable manifold, a Lipschitz hypersurface of co-dimension 1 in the energy space, passing through that solution, such that if you go below this hypersurface, you scatter to 0.
If you go above, you will blow up, but we cannot say anything about what kind of blow-up you get. And if you take data on that hypersurface, you will be of type 2. So again, this reinforces the belief that type 2 solutions in general are, in some sense, a co-dimension 1 phenomenon — very unstable, but not that unstable, in the sense that you may be able to prove stability of such solutions on this co-dimension 1 hypersurface. And that's the question I wanted to address. So let's say you take one of these type 2 solutions and construct this co-dimension 1 hypersurface passing through its data. Of course, this hypersurface is stable under the flow. On this hypersurface, you get type 2 dynamics; if you go below or above, you get either scattering to 0 or blow-up of a probably different type. And now the question is: what happens if you take your type 2 solution and perturb it along this hypersurface — what does this type 2 solution do? Again, this may seem like a very academic question. But remember: if you transplant this question to critical wave maps, then you don't have a co-dimension 1 hypersurface — you're generic; type 2 is all you get. And the kind of techniques you develop to answer this question may then help you better understand the stability of blow-up for critical wave maps, for example. So the result is the following; this is joint work with Schlag, work in progress. Assume you have a type 2 blow-up solution of the type constructed by Schlag and myself in 2012. Basically, you have to require the ν in λ(t) to be sufficiently small — I think ν less than 1/3 is enough, so it doesn't have to be extremely small, just small enough. Then take a perturbation consisting of a triple. So what does that mean?
ε₀ and ε₁ are the perturbation projected onto the part perpendicular to the unstable mode, and γ gives the part of the initial data corresponding to the unstable mode. Then, if you impose a further co-dimension 1 condition on this triple, there exists a Lipschitz continuous function γ̃ such that the corresponding initial data of this form lead to a finite-time blow-up solution, again of this form — so it has the same law λ(t). Basically, what this means is that on the hypersurface I mentioned before — the co-dimension 1 hypersurface, now passing through the data of the type 2 solution which I perturb — you expect to get some kind of foliation. The branch of this foliation that goes through my actual type 2 solution is such that data on this branch result in the same type of blow-up, and other branches will correspond to different types of blow-up. So you can say: OK, fine, that almost seems to imply stability, because if I perturb in this direction, I simply fall on a different leaf. But there is the problem that the topology has to be strong enough for these perturbations: I need to impose three halves derivatives. And ν is very small, so the solution which I perturb has regularity just barely above H^1, while the perturbation is of much higher regularity. This means that these leaves on the hypersurface are infinitely far apart. So to get a better result, you probably have to relax the law for λ(t): you cannot impose exactly t^{−1−ν}; you have to let it wiggle a bit, I think. That's probably what you have to do. But still, it is kind of surprising that you get this stability at all, because you might as well have thought the stability here is of infinite co-dimension.
Because the scaling is so special — λ(t) is t^{−1−ν} — why should this be stable at all? But it seems to be so. So what is this co-dimension one vanishing condition? Roughly speaking, for these perturbations — it is of course a nonlinear condition, because it's a nonlinear equation — but to first order you can think of it simply as a condition on ε₁. You can write it, and it's not very complicated: a certain integral has to vanish, expressed in terms of the Fourier transform associated with the operator (the distorted Fourier transform). It's pretty simple, and it comes very naturally: you see it if you approximate the equation in suitable coordinates by a linear equation and want to prevent a certain growth which comes from the resonance — you get a natural co-dimension one condition. OK. Now, I don't know how much time I have left, but maybe I can talk a little bit about the proof. Again, probably you can improve this result if you make λ(t) a bit more flexible than we do, but we're pretty happy with the result as is. Now, to understand this result, unfortunately you have to have a little bit of an idea of how these solutions due to Schlag, Tataru, and myself, and then Schlag and myself, were constructed. In fact, it seems that in all the solutions exhibited so far, there is always a two-step procedure. The naive ansatz which succeeded in the '04 theorem due to Schlag and myself — where you simply add a perturbation to your W and then solve the equation for this perturbation — doesn't work here. Instead, you have to make the right ansatz: you have to perturb around the right object. You can't just perturb around W_{λ(t)}; you have to replace it by a more accurate approximate solution.
And once you have a good enough approximate solution, you can complete things by solving the corresponding wave equation, just using general parametrices and what have you. In our case, we like to use Fourier-analytic methods to do this, and we believe these Fourier-theoretic methods are quite useful in other contexts as well; that's why we like to do it. So the first step: if you start with the naive ansatz around which you perturb, it's not going to work. What we do, already in the work with Tataru, is construct a sequence of better and better approximate solutions; let's call them u_k. Each u_k solves the equation only approximately, in the sense that this expression here is not going to be 0, but the larger k gets, the better the approximation becomes. So what do we do? I obtain the approximation u_k from the approximation u_{k-1} by adding a correction term v_k, and v_k is going to solve a corresponding wave equation, which you obtain by linearizing around u_{k-1} in this expression and setting it equal to 0. Now the problem is that these are all wave equations, and we don't know how to solve them, so we would like to replace them by some sort of elliptic equations, which we do know how to solve. How can we replace our wave equation by an elliptic equation? One way, which seems totally dumb, is to simply forget about the time derivative; that gives this kind of equation, and it's something you can try. Another thing you can try is to retain the time derivative but throw out the potential term, which is scary because it depends on time; that's what makes it so difficult. So we throw that out. This may seem very unnatural, but if you think about it, it's not so unnatural near the light cone, where the potential term actually becomes extremely small.
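Schematically, the iteration just described looks as follows; this is a sketch, with signs following the talk's convention box u = -u^5, and e_k denoting the error of the k-th approximation:

```latex
% Successive approximations, starting from u_0 = W_{\lambda(t)}:
u_k = u_{k-1} + v_k, \qquad
e_k := \Box u_k + u_k^5 \quad (\text{error; improves as } k \text{ grows}).
% The exact correction would solve the linearized wave equation
\Box v_k + 5\, u_{k-1}^4\, v_k = -\, e_{k-1},
% which one cannot solve directly. Instead, two simplifications are used:
% near the origin, drop the time derivatives (elliptic problem):
\Delta v_k + 5\, u_{k-1}^4\, v_k = -\, e_{k-1};
% near the light cone, drop the potential (free wave, handled via a = r/t):
\Box v_k = -\, e_{k-1}.
```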
So near the light cone this seems to be a good approximation of your equation, whereas near the origin it turns out the elliptic equation is a good approximation. To obtain the approximate solutions, you iterate these two steps. Now, the second one is still a wave equation, so why do we know how to solve it? Here a nice algebraic structure intervenes, maybe not miraculous, but nice. It turns out that these error terms can be very conveniently expressed in terms of the variable a, which I like to call a, the quotient r over t. And it turns out that if you throw out lots of terms from these errors, reduce them to the principal part, and express them in terms of t and this variable a, then with the right ansatz for the correction v_{2k} in this equation, which is of this form here, you get a very nice ODE for the correction, which is something we know how to solve. This ODE, as you would expect, is singular both at the origin and at the light cone. Of course, the ODE will depend on my choice of lambda, and this is where nu comes in: the coefficient beta depends on nu and on the stage of the iteration you're at. It turns out that if you choose nu positive, which is exactly the requirement given to you by Duyckaerts-Kenig-Merle, the solutions of this ODE are, across the light cone, of regularity H^{1 + nu/2} or so. That's exactly where the finite regularity of our solutions comes from. And you can write down a fundamental system of solutions for this ODE. OK. So this is all very nice and consistent, and if you then compute the error terms generated in this scheme, you see that they decay faster and faster.
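To see why an ODE in a = r/t appears, here is the basic computation, as a sketch; beta depends on nu and on the stage of the iteration, and the potential terms, which are responsible for the singularity at a = 0, are suppressed:

```latex
% Ansatz for a correction built from the self-similar variable a = r/t:
v(t,r) = t^{-\beta} f(a), \qquad a = r/t .
% A direct computation with the 1d wave operator gives
(\partial_r^2 - \partial_t^2)\, v
  = t^{-\beta - 2}\Big[ (1 - a^2)\, f''(a)
      - 2(\beta + 1)\, a\, f'(a) - \beta(\beta + 1)\, f(a) \Big],
% so matching the principal part of the error at each stage reduces to a
% second-order ODE in a, with a regular singular point at the light cone a = 1.
```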
This expression lambda(t) times t is of course t^{-nu}, which blows up as t goes to 0; the errors contain inverse powers of it and so become smaller and smaller the larger k becomes. As you let k go to infinity, these errors in some sense vanish. So to get my approximate solution, I now take the sum of all these corrections and add them to u_0. Then you could say: fine, my solution is just the series of these corrections. But the series doesn't converge, because you get larger and larger coefficients in it, and so at some point you have to stop; otherwise you get a divergent, purely formal solution. And this is exactly where we stop the process and add a final correction, obtained by solving a suitable wave equation, now using standard wave techniques; we use Fourier methods to do so. So why did the original work by Schlag, Tataru, and myself only get nu bigger than 1/2? There's a very technical reason. Somehow you need to control these terms epsilon to the 5, and if you don't have quite enough regularity, if you only have H^1 regularity, then it turns out we had some issues controlling them. However, there is a fairly simple fix. To explain where this fix comes from and how one does this more precisely: one passes to a new variable epsilon tilde, which is r times epsilon; that passes things to a one-dimensional context. One introduces new coordinates, big R, which is lambda(t) times little r, and tau, which is the integral of lambda(s). One gets some hideous equation, which has the advantage that the elliptic operator left in it is time independent.
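The change of variables just described, and the resulting relation between lambda and tau, can be written out as follows; this is a sketch, up to constants:

```latex
% New variable and coordinates:
\tilde\varepsilon = r\, \varepsilon, \qquad
R = \lambda(t)\, r, \qquad
\tau = \int_t^{t_0} \lambda(s)\, ds .
% With \lambda(t) = t^{-1-\nu}:
\lambda(t)\, t = t^{-\nu} \to \infty \quad (t \to 0^+), \qquad
\tau = \frac{t^{-\nu} - t_0^{-\nu}}{\nu} \sim \frac{t^{-\nu}}{\nu},
% hence, inverting, t \sim (\nu\tau)^{-1/\nu} and
\lambda(\tau) \sim (\nu\tau)^{\frac{1+\nu}{\nu}} = (\nu\tau)^{1 + \frac{1}{\nu}} :
% the smaller \nu, the faster \lambda(\tau) grows, i.e. the stronger the
% damping prefactor \lambda^{-2}(\tau) in the rescaled equation.
```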
However, you pay a price, which is that the time derivative becomes a dilation-type operator, not unlike what we saw in the talk by Masmoudi. So you get some kind of transport operator instead. But the nice thing is that the operator is now time independent. So we pass to the Fourier representation associated with this operator L. For that, we have to understand the spectrum of L, which is easy to do using standard methods: it has continuous spectrum from 0 to infinity, one negative eigenvalue, which we knew already, and a resonance at 0, which can be written down explicitly. Then any L^2 function admits a Fourier representation in terms of a generalized Fourier basis like this, plus a multiple of the unstable mode. What we would like to do is solve this hideous wave equation by using the Fourier representation of epsilon tilde and solving ODEs for every Fourier mode of epsilon tilde, as one does for the usual wave equation in R^{3+1}. However, there is a problem: R d/dR does not simply become xi d/dxi on the Fourier side, because these generalized eigenfunctions phi(R, xi) are not just e^{i R xi}; they're something much more complicated. You can describe them asymptotically, but only implicitly somehow; you don't have an explicit formula for these objects. So if you want to translate R d/dR into xi d/dxi, you will generate error terms of this form, error terms which are linear in the function you're trying to solve for, which is bad. Still, you can do it: you use the Fourier representation of your variable epsilon tilde and recast your equation in terms of its Fourier modes.
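The spectral decomposition just described can be written schematically as follows; this is a sketch, where phi_d denotes the eigenfunction of the negative eigenvalue (the unstable mode), rho the spectral measure, and the precise normalizations are those of the distorted Fourier transform for L:

```latex
% The time-independent operator obtained after the reduction (schematically):
\mathcal{L} = -\partial_R^2 - 5\, W(R)^4 ;
% spectrum: one negative eigenvalue \xi_d < 0, a resonance at \xi = 0,
% and absolutely continuous spectrum [0, \infty).
% Distorted Fourier representation of the perturbation:
\tilde\varepsilon(\tau, R)
  = x_d(\tau)\, \phi_d(R)
  + \int_0^\infty x(\tau, \xi)\, \phi(R, \xi)\, \rho(\xi)\, d\xi ,
% where \phi(R, \xi) \sim e^{\pm i R \sqrt{\xi}} only asymptotically as
% R \to \infty, so R\,\partial_R does NOT transfer cleanly to \xi\,\partial_\xi.
```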
You have the principal part here, which has some kind of dilation operator applied to x, and then there is something hideous: a term which depends linearly on x, plus everything that's left over from all the nonlinear interactions and so on and so forth. This linear term at first sight seems horrible, because it might seem that you cannot iterate these terms away: you're not gaining smallness in an obvious way. In the work with Tataru and Schlag, there was a wonderful trick that made it work. It was simply the observation that if you force the error at the first stage, where you stopped, to decay fast enough in time, then simply by integrating over time you pick up smallness, just from basic calculus. Now, this trick is not going to be useful if you try to understand the stability of these solutions, because if I add perturbations to these solutions, they're going to mess up this structure completely. So this is not going to work if I try to understand the stability, as I announced before. But OK, never mind. So then, in the work with Schlag and Tataru, we introduced rather crude norms in terms of the Fourier transform, and we ran an iterative scheme. The restriction nu bigger than one half comes from the fact that you need an embedding of this form, which requires alpha to be large enough, and alpha is determined in terms of nu; this gives you the restriction on nu. However, in the work with Schlag from 2012, we simply observed that this finite smoothness of our solutions is a phenomenon only on the light cone, and in the radial context, away from the origin, you have better embeddings. Therefore this issue of not enough smoothness is really a non-issue, and by working carefully enough you can extend the iteration procedure all the way down to nu bigger than zero. And you get this result.
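The "basic calculus" trick mentioned above, gaining smallness by integrating a fast-decaying error in time, amounts to the following sketch, where N stands for the extra decay one forces on the error at the final stage:

```latex
% If the forcing error decays like \tau^{-N} with N > 1, then for \tau_0 large
\int_{\tau_0}^{\infty} \tau^{-N}\, d\tau
   = \frac{\tau_0^{\,1-N}}{N-1} \;\ll\; 1 ,
% so each Duhamel-type time integration of the troublesome linear term
% picks up a small constant, which the iteration can absorb.
```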
Okay, but I said I want to study the stability of these solutions. How do I do that? Well, you take the initial data of such a solution, add a perturbation, and try to understand what the evolution of the corresponding solution looks like. To zeroth approximation, you look at the linear part of your equation for the perturbation in terms of the Fourier transform; you get this kind of equation here, which can be solved completely explicitly: there is a parametrix we can write down. And as you expect, if you perturb things at a fixed time and then solve the equation toward the singularity, you get things which grow very rapidly, which explode as you approach the singularity. In particular, in my rescaled coordinates, you get these kinds of expressions, lambda^{3/2}(tau), where tau blows up as you approach the singularity; these things become very large. So that's a bad thing, because in the end you have to do an iteration and you want to control things. However, it turns out I don't really care so much about the Fourier transform; I care about epsilon tilde, because epsilon tilde is what I feed into the nonlinearity, and if epsilon tilde is not too bad, then I can control the nonlinear terms. And it turns out that if you actually study this integral carefully and plug in this parametrix, then you find a very natural codimension-one condition. I can write it down for you: the integral from zero to infinity of rho^{1/2}(xi) x_1(xi) xi^{-3/4} sin(nu tau_0 xi^{1/2}) d xi has to equal zero, where x_1 is just the time-derivative part of the data. It's a simple condition like that.
And if I impose that, then as a matter of fact my epsilon tilde will still grow, because there is still a contribution from the resonance, but it will only grow linearly. And if you go back to the equation, the way we set it up, there is always a factor lambda^{-2} out front. How does lambda behave in terms of my variable tau? Well, lambda in terms of tau is comparable to tau^{1 + 1/nu}, so the smaller I choose nu, the faster lambda(tau) explodes as tau goes to infinity. This allows me, as a matter of fact, to use this factor to dominate the nonlinear term, provided epsilon tilde only grows linearly and nu is small enough. And so that's the idea. So, I'm over time, I'm sorry. Yeah?
[Question] About the first result you mentioned, the stability in the stronger topology: could you do it in higher dimensions just in the energy space? [Krieger] So you mean the one with Schlag? [Question] Yes; you mentioned this first estimate, so could you do it in higher dimensions just in the energy space? [Krieger] Well, no, as a matter of fact, I would think that you would need more somehow, more regularity. I haven't thought about it, but it's not obvious to me; I think I would need more regularity somehow. I don't know, but, yeah.
[Question] For the solutions which blow up in finite time, which you constructed by rescaling W by your lambda(t) with this power: is there any control of the time of existence, with respect to nu, for example?
[Krieger] Yes, of course, you could make all that explicit; we just never do these things. Actually, a postdoc tried to make this very explicit and even to solve it numerically on a computer, but it turned out to be more complicated than we'd like. But in principle, you should be able to make it explicit in terms of nu. You would have to sit down and compute it. I don't know; do you have any intuition for it? Thank you again. Thank you. Sure.