So, this is a continuation of my lecture, so let me review very quickly what we discussed in the introduction, where I talked about the final state conjecture. The final state conjecture is a general conjecture about the large-time behavior of general solutions of the Einstein equations in the asymptotically flat regime. It is a huge conjecture which contains many other simplified cases, you could say, but each one of these cases is a huge conjecture in itself. Rigidity is the statement that the only stationary solutions of the Einstein equations in vacuum are the Kerr family, and I talked a little bit about this. Stability is what we are talking about now: if you make small perturbations of Kerr, you stay close to Kerr. As a particular case, which is now understood, I talked about the stability of Minkowski space and some of the ideas behind it. I'll mention more as I go, and today I'll talk about black hole stability; I started already. The conjecture is this one: you look at the picture of the Kerr solution — this is the exterior of a Kerr solution, this is the horizon, this is scri, null infinity — and you have a spacelike hypersurface. You look at the induced metric on the spacelike hypersurface, that is, the initial data set corresponding to Kerr, and make a small perturbation. The conjecture is that you are going to converge to another Kerr solution. The "another" is very important, because the final state is going to be different from the original state, and of course by itself it is a huge mathematical difficulty to find these final states. So, as I mentioned in fact a few times, these are of course the Einstein equations in vacuum, and g(M, a) is the Kerr metric.
The Kerr metric depends on two parameters, M and a, and if you start varying the parameters, you get solutions of the linearized Einstein equations: dg(M, a)/dM is a non-trivial solution of the linearized equations with zero on the right-hand side, and the same thing if you take the derivative with respect to a. In other words, you get essentially bound states for the linear equation: non-trivial solutions of the linearized equation corresponding to zero eigenvalues. So you expect this to create a lot of problems. In fact, you get even more problems, because due to diffeomorphism invariance you can also do variations by diffeomorphisms, and you get a huge kernel. So you find that the kernel of the linearized Einstein equation contains this plus this; the full dimension of the kernel is actually four times infinity plus two. It's a huge, huge thing, and that of course makes life very difficult. Now, I talked a little bit about the geometric framework, so let me recall it very quickly. First of all, this is something very general — for the Einstein equations in vacuum, and not just in vacuum: you start with a null pair. This is very important, because null directions in general relativity are fundamental, and you want a geometric description that reflects that fact. So you start with a null pair: this is a null vector, this is another null vector, and you normalize them so that g(e3, e4) = -2. Then you look at the horizontal structure induced by this, in other words the space perpendicular to e3, e4. This does not have to be integrable. Sometimes it is integrable, and that's very useful, but in general it is not, which creates additional — but very interesting — mathematical difficulties.
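As an aside, the normalization just mentioned can be checked in a couple of lines. This is a minimal sketch in flat Minkowski space, with the standard (hypothetical, but conventional) choice e4 = ∂_t + ∂_x, e3 = ∂_t - ∂_x:

```python
# Sketch: the null-pair normalization g(e3, e4) = -2 in Minkowski space,
# signature (-, +, +, +). The choice of e3, e4 below is one standard example.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric
e4 = np.array([1.0, 1.0, 0.0, 0.0])  # outgoing null vector
e3 = np.array([1.0, -1.0, 0.0, 0.0]) # incoming null vector

print(e4 @ g @ e4)  # 0.0 : e4 is null
print(e3 @ g @ e3)  # 0.0 : e3 is null
print(e3 @ g @ e4)  # -2.0 : the normalization used in the text
```

The horizontal space here is the (y, z)-plane, spanned by vectors orthogonal to both e3 and e4.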
And you then define a null frame to consist of the null pair plus an orthonormal basis of that horizontal space, which again does not have to be integrable. So at every point you have a collection of vectors of this type: two null vectors and then the ones which are orthogonal to them; this horizontal space is obviously spacelike. Then, once you have the frame, you look at the connection. You decompose the connection into its various components relative to the frame and give them names, and if anybody wants, I can repeat the definition of each one. You do the same thing for the curvature and get α, β, ρ, ρ*, β̄, ᾱ. (For those who know the Newman-Penrose formalism, the dictionary — depending on where you start — is that ρ + iρ* corresponds to Ψ2, and then one runs through Ψ0, Ψ1, Ψ2, Ψ3, Ψ4; except that here everything is real.) This is in a sense more geometric, because I don't need to pick a particular frame: all these definitions are independent of the basis e_A that I pick here. But of course I can also complexify — there is a simple relation between the various descriptions — and when you talk about Kerr, it actually helps, even in our formalism, to look at ρ + iρ*. Okay. Then you write down the main equations. In other words, you write down the Cartan structure equations — derivatives of Γ plus Γ times Γ gives you the curvature — which is one system of equations at the level of the Γ's, and then you have the Bianchi identities for R. So the main equations are the Cartan equations plus the Bianchi equations.
And there is a thing that is important to mention, which I mentioned last time: the S-foliations. These are foliations induced by e3, e4. Now, if the horizontal structure is not integrable, you cannot talk about foliations. But very often — for example in Schwarzschild or Minkowski space — you can pick the null frame so that it is actually integrable, and it gives you spheres, topologically two-spheres. These are the S-foliations. An S-foliation means that at every point of the spacetime you consider, you have null light cones going in one direction and other null cones going in the other direction, and the intersections are two-spheres. Say this is u = constant and this is ū = constant; where they intersect, you get a two-sphere S(u, ū). And then you have a frame at every point, generated by the family of light cones: on S you have a frame with one null vector we call e4 and another we call e3. So at every point on the sphere you have such a frame. S-foliations play a very important role in the stability of Minkowski space, as I mentioned last time. All right. So, this is the Kerr family again, in Boyer-Lindquist coordinates; these are the coefficients of the Kerr solution. Obviously we could discuss stationarity, axisymmetry and so on and so forth. Anyway, here I want to put in evidence that there exists this null pair e3, e4, defined in terms of the Boyer-Lindquist coordinates, called the principal null pair. It is called principal because it has some remarkable properties, which I will review again. Anyway, here are the basic quantities, expressed now relative to this frame. For example, the definition of χ_AB is this one: you take e_A and e_B, the vectors perpendicular to e3, e4, and you take the covariant derivative of e4 in the direction e_A, so χ_AB = g(D_A e4, e_B).
Now, this quantity, which has geometric significance, is symmetric if the horizontal span is integrable, but otherwise it is not. In Kerr, for example — Kerr is the obvious example — if you take e3, e4 to be the principal null pair I had earlier, then the horizontal space is not integrable, and therefore these quantities are not symmetric, and you get more components. And again, it is very important to keep track of the components, because they have different behavior. The curvature components are defined very easily: α has two e4's, β has two e4's and one e3, ρ has two e4's and two e3's, and so on and so forth, by symmetry, interchanging e3 and e4. The basic equations, again, are the null structure equations, which relate the derivatives of the Γ's to the curvature; there are some Codazzi-type equations, of the form: derivative of Γ equals curvature plus derivative of Γ plus Γ times Γ. So this is just a very schematic description of what the equations look like. And then there are the null Bianchi equations, which are equations for components of the curvature, and which formally look like this: a derivative in the e4 direction — this is the e4 direction, this is the e3 direction — of R equals a derivative of R plus Γ times R, and so on and so forth. The way to think about these equations, both in the stability of Minkowski space and in the stability of black holes, is that the equations in the e4 direction can be viewed as equations along null geodesics — or null curves, if they are not exactly geodesics. And somehow, if you already have information about the curvature R, you can hope to integrate these as transport equations. Of course they are more complicated, because there can be a derivative on the right-hand side; but very roughly, one can think of them as transport equations.
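The definition χ_AB = g(D_A e4, e_B) above can be made concrete in the simplest integrable case. This is a hedged sketch in Minkowski space in spherical coordinates, with the (assumed, but standard) choices e4 = ∂_t + ∂_r and the orthonormal spherical frame; the expected answer tr χ = 2/r is the mean curvature of the round spheres:

```python
# Sketch: compute chi_AB = g(D_{e_A} e4, e_B) for round spheres in
# Minkowski space (spherical coordinates). Frame choices are one
# standard convention, not the only possible one.
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
x = [t, r, th, ph]
g = sp.diag(-1, 1, r**2, r**2*sp.sin(th)**2)   # Minkowski, spherical coords
ginv = g.inv()

def Gamma(a, b, c):   # Christoffel symbols Gamma^a_{bc}
    return sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
               - sp.diff(g[b, c], x[d])) for d in range(4))/2

e4 = [1, 1, 0, 0]                                   # outgoing null vector
eA = [[0, 0, 1/r, 0], [0, 0, 0, 1/(r*sp.sin(th))]]  # orthonormal frame on the sphere

def chi(A, B):
    # (D_{e_A} e4)^a = e_A^b Gamma^a_{bc} e4^c  (components of e4 are constant)
    cov = [sum(eA[A][b]*Gamma(a, b, c)*e4[c]
               for b in range(4) for c in range(4)) for a in range(4)]
    return sp.simplify(sum(g[a, d]*cov[a]*eA[B][d]
                           for a in range(4) for d in range(4)))

tr_chi = sp.simplify(chi(0, 0) + chi(1, 1))
print(tr_chi)   # 2/r : the mean curvature of the round sphere
print(chi(0, 1))  # 0 : chi is symmetric here, since the span is integrable
```

In Kerr, with the principal null pair, the analogous computation produces a non-symmetric χ, reflecting the non-integrability mentioned above.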
The second kind one can think of as elliptic equations on the leaves of the foliation, provided the foliation is integrable. And then there are equations which one has to understand in a very different way — these are much more complicated; these are the equations where the hyperbolic nature of the Einstein equations has to be taken into account. In the stability of Minkowski space, we said that these types of equations have to be understood from the point of view of energy estimates: generalized energy estimates, using the symmetries of Minkowski space, or of spacetimes close to Minkowski, in order to derive the estimates and so on. This is in fact the main part of any construction of solutions of the Einstein equations. All right. Now, the crucial fact in Kerr is that relative to the principal null directions e3 and e4, all components of the curvature are zero, with the exception of ρ and ρ*, which are given by a very nice simple expression. And if you look at the Ricci coefficients, again some of them are zero, but not all: there are still lots of components which are non-zero in Kerr. In Schwarzschild, in addition, the horizontal distribution of e3, e4 is integrable. That's very nice, because now you can do a little bit of Hodge theory on the two-surfaces, which plays a fundamental role in the stability of Minkowski space. You also have that ρ*, this one, is equal to zero. So in Schwarzschild you get just one component of the curvature, this ρ, which is -2m/r³ — also very easy to calculate. And in addition you get more components of the Ricci coefficients which are zero in Schwarzschild, such as η, η̄ and ζ.
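The statement that ρ = -2m/r³ is essentially the only curvature left in Schwarzschild can be cross-checked with a symbolic computation. The sketch below verifies two convention-independent facts: Schwarzschild solves the vacuum equations (Ricci = 0), and its Kretschmann invariant is 48M²/r⁶ = 12·(2M/r³)², which is exactly what one expects when, up to the algebraic structure of a type D metric, ρ = -2M/r³ carries all the curvature. The sign conventions in the code are one standard choice:

```python
# Hedged sketch: Schwarzschild is Ricci-flat, and its Kretschmann scalar
# R_{abcd} R^{abcd} equals 48 M^2 / r^6 = 12 (2M/r^3)^2.
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild metric
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
         + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
         for d in range(n))/2) for c in range(n)]
        for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def Riem(a, b, c, d):
    expr = sp.diff(Gam[a][d][b], x[c]) - sp.diff(Gam[a][c][b], x[d]) \
        + sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b]
              for e in range(n))
    return sp.simplify(expr)

R = [[[[Riem(a, b, c, d) for d in range(n)] for c in range(n)]
      for b in range(n)] for a in range(n)]

# Vacuum equations: Ricci_{bd} = R^a_{b a d} = 0
ricci = [sp.simplify(sum(R[a][b][a][d] for a in range(n)))
         for b in range(n) for d in range(n)]
print(all(v == 0 for v in ricci))   # True

# Kretschmann scalar (the metric is diagonal, so indices raise entrywise)
Rd = lambda a, b, c, d: g[a, a]*R[a][b][c][d]
K = sp.simplify(sum(Rd(a, b, c, d)**2
                    * ginv[a, a]*ginv[b, b]*ginv[c, c]*ginv[d, d]
                    for a in range(n) for b in range(n)
                    for c in range(n) for d in range(n)))
print(K)   # 48*M**2/r**6
```

The factor 12 relating K and ρ² is a fact about type D vacuum metrics where only the middle curvature component survives; here it serves only as a consistency check on the -2M/r³ claim.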
In fact, the only non-vanishing components of Γ in Schwarzschild are tr χ, tr χ̄, ω and ω̄. These are connection coefficients — if you don't remember the definitions, it doesn't matter. The important thing to note is that if you use a principal null frame, many things really vanish; that's why principal null frames are so important. In Minkowski space, once again, you go from Kerr to Schwarzschild and then get even more simplification: all components of the curvature are zero, and those two Ricci coefficients ω and ω̄ are zero as well, so the only non-trivial components are tr χ and tr χ̄, which have a very simple geometric meaning. Right, so that's the situation in Schwarzschild. [Question: Is there not a choice of e3 and e4 in Kerr for which the orthogonal distribution is integrable? The Boyer-Lindquist coordinates have spheres, so one could take those.] Yes, sure, but... [Remark: the so-called Kinnersley tetrads are not integrable either.] Well, you want to keep the symmetry here. In order to gain integrability, you lose something — you lose the diagonalization of the curvature, and you get something else. Yes, sure; there is always a trade-off that you might want to use. That's true. But still, I believe these frames are fundamental. Maybe you want to have the principal null frame, or something close to it, and from it construct another frame which is integrable. [There is a null frame which is integrable.] Yes. And to be clear about the term: "principal" is defined in Kerr, in the sense that the curvature diagonalizes with the exception of ρ and ρ* — all components of the curvature vanish except ρ and ρ*.
[Question: I thought there were other frames, symmetric between past and future, which were better integrable.] Okay, maybe — we can talk about it afterwards. All right. So here is now the point about perturbations. I want to perturb Kerr, of course. In the simplest possible formulation, I want to think of a solution of the Einstein equations for which there exists some frame, close to the principal null frame of Kerr, such that all components which are zero in Kerr are, relative to the corresponding frame, now O(ε). So I have an O(ε) perturbation of the e3, e4 of Kerr, and therefore O(ε) perturbations of the various components of curvature and Ricci coefficients. So this is a definition — a very simple, very naive definition, but it's the simplest I can think of for what is meant by an O(ε) perturbation of the spacetime. All right. Now, the problem is that you don't know which frame you are talking about. In fact, there are infinitely many frames you can use. If I have one good frame, for which this holds, I can make a general frame transformation, which takes a null frame into another null frame: I go from e3, e4 to e3′, e4′, which change like this, and e_A′ changes like this; then the components also change, and I get another O(ε) frame. In other words, there are infinitely many possible frames like this. So which one do I choose? And, as I mentioned last time, if I don't have a correct gauge condition, I have no chance to prove stability of the Kerr solution. The gauges are fundamental: finding the correct gauge is really the heart of the problem. Okay. So now, there is a remarkable fact about the way these quantities transform.
So, I want to calculate how every component transforms under these frame transformations, and I find something remarkable: α and ᾱ are O(ε²)-invariant. In other words, if I start with a frame, calculate α and ᾱ in that frame, and then make such a change, I observe that the difference between α′ and α is O(ε²), and the same for ᾱ. So in a certain sense these do not depend on the choice I make. That is extremely important, as we shall see. [Question: At which stage do you construct a full coordinate system? The frame alone is not enough in the end. Do you do it at the end?] Yes — essentially at the end; well, you have to do everything at once, in a sense. In nonlinear equations everything has to be done together. Okay. In any case, the other observation is that for perturbations of Minkowski space, all curvature components are O(ε²)-invariant. In perturbations of Minkowski space, ρ and ρ* are of course also O(ε), and if I do these transformations, it is very easy to see that all curvature components are O(ε²)-invariant. This is one major simplification in the stability of Minkowski space. All right, and I talked a little bit about this last time. So let me mention a few more things about the stability of Minkowski space. The fundamental point there is that, exactly because the curvature is almost invariant — O(ε²)-invariant under perturbations of the frame — I can look at the Bianchi identities: we have, say, the pair dR = 0 and d*R = 0.
Remember that I said this kind of pair, the Bianchi identity pair, can be viewed as a kind of Maxwell system. There is also an energy-momentum-type tensor with four indices — the Bel-Robinson tensor — whose divergence is zero, and therefore you can construct energy norms and analyze this system like a Maxwell system, using the symmetries of Minkowski space — in fact approximate symmetries, because obviously the perturbations no longer have exact symmetries, but they have approximate ones. That's how you treat the hyperbolic character of the Einstein equations near Minkowski space. Once I understand this, it has to be done together with something else, which is the construction of a frame. You construct it using a time function t and an optical function u. The time function is maximal, and u verifies the eikonal equation g^{αβ} ∂_α u ∂_β u = 0. In fact, you solve the Einstein equations together with this u — this is important; you really have to think of solving both — and of course you also solve for t in the original proof of the stability of Minkowski space. In other words, you have to construct all of this together. Once you have these two functions, they give you a foliation, because the light cones u = constant intersect the level sets t = constant in two-surfaces. Then I can use these two-surfaces, and u and t, to construct the null frame perpendicular to the sections S_{t,u}, and define the connection coefficients from it. And then, very importantly, you construct vector fields, built from the frame, and you use these vector fields to take derivatives of the curvature.
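The eikonal equation for the optical function u can be checked explicitly in a model case. The sketch below does this in Schwarzschild rather than a perturbed spacetime, with the standard u = t - r_* built from the tortoise coordinate; in Minkowski space the same computation is just u = t - r:

```python
# Hedged sketch: u = t - r_* solves the eikonal equation
# g^{ab} d_a u d_b u = 0 in Schwarzschild, where dr_*/dr = 1/(1 - 2M/r).
import sympy as sp

t, r, M = sp.symbols('t r M', positive=True)
f = 1 - 2*M/r
rstar = r + 2*M*sp.log(r/(2*M) - 1)   # tortoise coordinate
u = t - rstar

# inverse metric: g^{tt} = -1/f, g^{rr} = f; angular terms drop out
# because u is spherically symmetric
eik = -sp.diff(u, t)**2 / f + f * sp.diff(u, r)**2
print(sp.simplify(eik))   # 0 : u is an optical function
```

The level sets u = constant are the outgoing light cones; in the actual stability proof the analogous u must be solved for dynamically, together with the Einstein equations, as said above.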
So, you commute the vector fields with this equation; you get error terms, and so on and so forth. That's more or less what I explained last time. The very important idea behind all this, which I explained last time, is that you can get decay not by using the fundamental solution — which is a very complicated thing, and you run into lots of difficulties if you try to use it — but rather by this vector field method, which I'll say more about later. It's a robust method which allows you to derive decay and, at the same time, energy estimates and so on. Okay. In any case, here are the differences between Kerr stability and the stability of Minkowski space. First, some null curvature components — ρ and ρ* — are non-trivial, and as a consequence you cannot use this Bianchi system anymore. The Bianchi system will fail, because there are bound states for it; if you try to proceed this way, you run immediately into trouble. So that methodology, unfortunately, doesn't work. Second, all other null components of the curvature tensor are sensitive to frame transformations. This is what I mentioned earlier: α and ᾱ are invariant, but unlike Minkowski space, where all curvature components are invariant up to O(ε²), this is not the case here. Third, the principal null directions are not integrable — that's another huge difficulty. And then you have to track dynamically the parameters of the final Kerr solution and the correct gauge condition, and this is of course the most difficult part. Because if you are not in the correct center-of-mass frame, you don't have decay, and therefore you cannot conclude anything about the nonlinear equations — you cannot close. So finding the correct center-of-mass frame, and the way to track the final parameters, is one of the main difficulties in proving stability, which of course you don't have in Minkowski space.
And then finally, there is another thing: even the simplest equation, the wave equation □_g φ = 0 in Kerr for a scalar φ — which you can think of as a kind of simplified linearization, much simpler, of course, than the full system satisfied by the linearized Einstein equations — already has lots of difficulties, as I shall discuss. All right, so these are the things you have to worry about if you want to prove the stability of black holes. Are there any questions? All right, let me continue then. Okay, so there has been a lot of progress, and I'll try to talk about the most important steps; of course I will not be able to review absolutely everything that has been done. In my view, the most important conceptual contributions to the understanding of the stability problem start with Teukolsky, who proved in 1973 — again building on the work of many other people before, which I'm not going to talk about — that the extreme curvature components α and ᾱ, which are O(ε²)-invariant and so already more interesting than anything else, verify, again up to O(ε²) error terms, decoupled linear wave equations. So in linear theory, α and ᾱ verify certain equations. The equations are not so simple, because there are first-order terms, and there are terms where derivatives are multiplied by something; and something similar happens for ᾱ. But in any case, they are not coupled to the other components of the curvature, nor to the other components of the Ricci coefficients, and in that sense they are quite remarkable.
It turns out it is not so difficult to show this; it has to do with the invariance properties of α and ᾱ. But the unfortunate thing about these equations for α and ᾱ is that they are non-conservative: you cannot find a good conservation law; they are not derivable from a Lagrangian. They are still useful, for example for showing that there are no exponentially growing modes — you can analyze them and show there are no exponentially growing modes — but this by itself, as we discussed many times, is far from what you need for the nonlinear equations. In any case, this led to Whiting's result of 1989, where he showed that the Teukolsky linearized equations have no exponentially growing modes. Some of this had been done by Teukolsky himself for a few modes, but Whiting was able to do it for all modes — I think that was his main contribution. Later, Yakov Shlapentokh-Rothman, in 2014, proved a slightly stronger result than Whiting: a kind of quantitative mode stability for this equation. This result was then used in the remarkable work of Dafermos-Rodnianski-Shlapentokh-Rothman in 2015, which combined a new vector field method, which I'll mention in a second, with Shlapentokh-Rothman's result to deduce quantitative decay estimates. What I mean by quantitative decay — remember, I have said this many, many times — is that it is not enough to show that these equations are well behaved; you have to actually derive quantitative decay, which you can later hope to use to close the nonlinear terms, because to control the nonlinear terms you need decay. Anyway, I'll say a little more about these results in a second. [Question: What about the transformations between the Teukolsky equation and the Regge-Wheeler equation?]
Yes, I'll mention that in a second. All right. So this was, I would call it, the first important conceptual breakthrough. Another important breakthrough is the classical vector field method — while the first one is due to physicists, this one is due to mathematicians. The classical vector field method is a non-perturbative method, based on using the continuous symmetries of Minkowski space and adapted higher-order energy estimates, built from these symmetries, to derive robust uniform decay and peeling. This is what I discussed last time: in the simplest case, the wave equation in Minkowski space, you could of course derive the decay properties of solutions using the fundamental solution, but that is very hard to reproduce if you perturb the metric. Instead, the vector field method commutes the equation □φ = 0 with a class of vector fields corresponding to the symmetries of Minkowski space, performs energy estimates, and from the energy estimates obtains decay. It is a robust way to get decay without doing expansions or anything of that sort. This also leads to peeling: you don't just get decay estimates; you also get that the various derivatives of the solution of the wave equation have different decay rates. The method generalizes to the Maxwell equations and to more complicated systems, where you get the corresponding peeling — again without using the fundamental solution in any way, just the symmetries of Minkowski space. Then there is another important thing connected with the classical vector field method, which is the null condition: a structural, gauge-dependent condition on the quadratic part of a nonlinear system of wave equations which ensures global regularity.
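A concrete way to see what the null condition suppresses: the basic null form Q0(φ, ψ) = -∂_tφ ∂_tψ + ∇φ·∇ψ vanishes identically on plane waves traveling in null directions, which are exactly the self-interactions that drive blow-up for generic quadratic nonlinearities. A minimal sketch (in two space dimensions for brevity, with a hypothetical wave profile F):

```python
# Sketch: the null form Q0(phi, phi) vanishes on a plane wave moving in
# the null direction (1, cos a, sin a). F is an arbitrary profile.
import sympy as sp

t, x, y, a = sp.symbols('t x y a')
F = sp.Function('F')

phi = F(t - sp.cos(a)*x - sp.sin(a)*y)   # solves the wave equation

# Q0(phi, phi) = -(d_t phi)^2 + (d_x phi)^2 + (d_y phi)^2
Q0 = -sp.diff(phi, t)**2 + sp.diff(phi, x)**2 + sp.diff(phi, y)**2
print(sp.simplify(Q0))   # 0 : null forms kill plane-wave self-interaction
```

By contrast, a non-null quadratic term such as (∂_tφ)² equals F′² on the same plane wave and does not vanish — which is the mechanism behind blow-up for equations violating the null condition.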
You identify a certain structure of the quadratic terms — it suffices to look at the quadratic terms in the nonlinearity — and you can immediately see whether the null condition is verified or not in a given system of coordinates. But in general, as I mentioned last time, for more complicated systems the null condition depends very much on the gauge choices you make: in some gauge you may have a null condition, in another you may not. So it is a gauge-dependent notion. Then there is the nonlinear stability of Minkowski space, which is based on these ideas: it uses generalized energy estimates and approximate symmetries, and this allows you to get decay estimates for the curvature tensor; once you have decay estimates for the curvature tensor, you also get them for the Γ's by using the Cartan equations. That is roughly the classical vector field method in a nutshell. [Question: Here you use the fact that there are Killing vectors of Minkowski. Can't you use the fact that there is a Killing tensor for Kerr?] Well, people are starting to use that sort of thing, but it's not yet clear how far you can go. Yes, absolutely — it has been used: Andersson and Blue have used such things. All right. So now to the new vector field method. When you talk about black holes, the situation is more complicated because you don't have enough symmetries: the symmetries of the Kerr solutions are much fewer than the symmetries of Minkowski space. You can still use the symmetries, but that's not enough; you have to use something else, and that is what the new vector field method provides. Again, this was developed by mathematicians in the last 15, 16, 17 years. Let me try to explain a little, because I think it's very interesting. So let's look at a picture of a black hole: this is the exterior of the black hole.
This is the horizon, this is scri. I'm looking at the region maybe slightly inside the horizon — I mean inside the black hole. For simplicity, say it is Schwarzschild. This is the horizon, r = 2m. Then there is this other hypersurface here, r = 3m, which, as I mentioned earlier, corresponds to null geodesics which stay there forever, in the middle — the trapped null geodesics. And then there is scri, and so on and so forth. Now suppose you want to analyze just the wave equation □φ = 0, but in Schwarzschild — the simplest possible linear equation you could want. You realize that you have a lot of new difficulties that you didn't have in Minkowski space. The simplest is the difficulty exactly along the horizon, due to the fact that on the horizon the vector field ∂_t — call it T — becomes null. The energies are always constructed from this vector field, and if you look at the corresponding energy, you get a degeneracy — a degeneracy at the horizon. So you have to do something there. The other thing you have is the trapped null geodesics. These lead to a huge difficulty, which is natural, because from geometric optics you expect something to go wrong there; and indeed, if you look at the energy estimate you want to do, you will see a degeneracy exactly along r = 3m. Then of course there are all the issues at infinity, but in that region you can argue as in Minkowski space. The new methodology that people have discovered is that it still pays to look at vector fields — it is still based on vector fields; you don't use the fundamental solution.
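The radius r = 3m of the trapping just described can be checked in one line: null geodesics in Schwarzschild are governed by an effective potential whose critical point sits exactly at the photon sphere. A minimal sketch:

```python
# Sketch: the effective potential for null geodesics in Schwarzschild
# (per unit angular momentum squared) is V(r) = (1 - 2M/r)/r^2; its
# critical point is the photon sphere r = 3M.
import sympy as sp

r, M = sp.symbols('r M', positive=True)
V = (1 - 2*M/r) / r**2
crit = sp.solve(sp.diff(V, r), r)
print(crit)   # [3*M] : null geodesics can orbit forever at r = 3M
```

Since this critical point is a maximum of V, the orbits there are unstable — which is why trapping obstructs, but does not destroy, decay estimates.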
It is still based on vector fields, but you construct a new class of vector fields which are not necessarily causal. For example, you can find a good vector field that takes into account the region near the horizon: these are the redshift-type vector fields, which were introduced and used by Dafermos and Rodnianski. Then there is the region near r = 3m, which is really the most difficult because of this degeneracy. This has been treated by lots of people in the last 15, 16 years, and has led to methods based on so-called Morawetz vector fields. These give global estimates for solutions of the wave equation, but the estimates degenerate at the horizon and at the trapping region, and because of this you have to combine this type of vector field with vector fields which are good near the horizon and vector fields which are good near infinity. This is a much more engineering-type approach than the old vector field method, which was more global: here, in every region you find something that works, and then you put the pieces together somehow. You always need something global — in this case the Morawetz estimate — but since it may degenerate at the horizon and at trapping, you have to supplement it with estimates in those regions. The region near infinity, as I said, is more like the case of Minkowski space. So this was a sort of designer class of vector fields, which led to the ability to prove results for the wave equation in Schwarzschild. In Kerr it is even more complicated: near the horizon there is in fact an entire region, called the ergoregion, which I mentioned a few times, in which T actually becomes spacelike.
And that leads to many more analytical difficulties, which have been resolved, in particular in the result that I mentioned here. Okay, so that's the situation. A new methodology has emerged in these last 15 years in connection with the study of boundedness and decay for this type of equation. There are many partial results, starting with Blue and Soffer in 2003 — so this is already 15 years — and then many, many others. But the final result was proved by Dafermos-Rodnianski-Shlapentokh-Rothman, and it deals with the full subextremal range |a| < M in Kerr. All right, so now, a third important breakthrough in our understanding today is the result — this is what you mentioned — of Chandrasekhar: there exists a transformation which takes this alpha, which verifies the Teukolsky equation, which is non-conservative, into a new tensor P, which can be calculated from it. In fact, it involves two derivatives of alpha — you need two derivatives of alpha to get P — and this P verifies a Regge-Wheeler type equation. In other words, it's a wave equation for P plus a potential times P, equal to zero. This is an observation which was already made by Chandrasekhar, but the full use of this transformation in terms of actually getting real estimates is due to Dafermos-Holzegel-Rodnianski in 2016, where they derive a physical-space expression for this transformation; the original Chandrasekhar transformation was based on modes, but anyway, this is maybe not so important. The more important thing is that they used the methodology that has emerged in these last 15 years, which I mentioned earlier, in order to analyze the decay rates for P for this equation.
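Schematically, in the Schwarzschild case (normalizations and weights vary between references; this is a sketch, not the precise formula), the transformation has the shape:

```latex
% From the Teukolsky variable \alpha to the Regge--Wheeler variable P:
% two null derivatives of a suitable rescaling of \alpha,
P \;\sim\; \underline{L}\,\big(\,\underline{L}\,( r^{k}\,\alpha )\,\big),
\qquad \underline{L} \ \text{an ingoing null derivative},
% and P satisfies a wave equation with a real potential and no
% problematic first-order terms (Regge--Wheeler type), unlike the
% Teukolsky equation satisfied by \alpha itself:
\Box_g P - V\,P = 0, \qquad V = V(r)\ \text{an explicit potential}.
```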
Right, so they get uniform decay rates for this, and then once you have a full understanding of P, you can revert and go back to alpha and get estimates for alpha — and of course also estimates for alpha-bar, because there is a similar transformation from alpha-bar to a corresponding P. This is then used as a first step to prove the linear stability of Schwarzschild, which I'll mention in a second. Okay, then recently there have been even more interesting developments: something similar can be done to control the Teukolsky equation even in Kerr, for a sufficiently small. So for a sufficiently small, there is now a way of implementing this observation of Chandrasekhar — or something slightly more complicated, but which still allows you to analyze and get the decay estimates for this type of equation. In other words, you start with alpha satisfying the Teukolsky equation, you get a system — something more complicated — which you can analyze, and then you can go back and get the estimates for alpha. So this can be done now for Kerr for small a, which is clearly very important. By the way, contrary to Rodnianski, Teukolsky is written with a Y at the end. Ah, okay, sorry. Okay, good. I will change it. All right, so these are again new results of Dafermos-Holzegel-Rodnianski and, in the last year, of Ma, a student of Andersson. Okay, so now: linear stability of Schwarzschild. Once you understand this type of transformation, you can show that Kerr with a = 0 — that is, Schwarzschild space — is linearly stable, quantitatively, in the sense that you get real decay estimates, uniform decay estimates, which is immensely important if you want to do the nonlinear theory.
Once we mod out the unstable modes related to the two-parameter family of nearby stationary solutions — this is what I mentioned at the beginning — and the linearized gauge transformations, right? In linear theory, of course, it's much easier to do this. But once you do that, you can show that everything decays appropriately: you get boundedness and decay for all the quantities. This is done by using the Chandrasekhar transformation: you derive from it an equation which, as I mentioned, you can analyze, and from it you get alpha and alpha-bar. Now, alpha and alpha-bar are gauge-independent in some sense — at least in linear theory they are completely gauge-independent — but not the other quantities. The other curvature quantities are not, and also, of course, the Ricci coefficients. So reconstruction means that you now have to find appropriate gauge conditions. This is only going to work if you impose gauge conditions: you find appropriate gauge conditions to derive boundedness and decay for all the other quantities of the linearized Einstein equations on Schwarzschild. So this is basically what's done. There are also some additional results based on different approaches: by Hung, Keller and Wang in 2017, and then, based on wave coordinates, by Hung and by Johnson in 2018. Okay, so, summary of what we understand so far. We have tools to control, in principle, the main curvature quantity P. Remember, P is obtained by going from alpha to P by a sort of second-order operator. So this verifies a nice equation. But of course, in nonlinear theory — the Chandrasekhar transformation gives a nice equation with zero on the right-hand side in linear theory, but in nonlinear theory there will be a huge number of terms on the right-hand side, which are in fact very complicated.
But at least we know that they are going to be quadratic: quadratic in small quantities which vanish in Schwarzschild. Okay. So we have tools, in principle, to control the invariant quantities alpha and alpha-bar — because again, if I know P, I can go back to alpha and alpha-bar. So what remains to be done? Find quantities that track dynamically the mass and angular momentum. Find an effective dynamical method to fix the gauge problem. Determine the decay properties of all important quantities and close the estimates of the full nonlinear problem. So now, what time is it? Let's see. Okay, I can go on. All right, so let me talk a little bit about the nonlinear problem — going from linear to nonlinear. We understand something about linear stability of Schwarzschild, and now we want to go to nonlinear. In a first approximation, you may want to do Schwarzschild first, because it's a little bit simpler; as we have seen, Schwarzschild is much simpler. Now, there are lots of difficulties, of course, in going from linear to nonlinear. But in particular, one of the unpleasant things about doing the nonlinear stability of Schwarzschild is that if I start with initial data close to Schwarzschild, I'm not going to converge to Schwarzschild. The final state will also have an angular momentum. So the final state has parameters a_f and m_f: even if I start here with a perturbation of a = 0, I will converge to a final state which has an angular momentum. And therefore I cannot really study stability of Schwarzschild — it doesn't seem like I can — without understanding the full stability of Kerr, at least for small a.
And then Kerr has many other complications and, you know, you'd like to separate complications, because otherwise you will never be able to do anything if you try to do everything at once. So we really want to separate out Schwarzschild. The question is: is there a way to impose conditions so that the final state is still Schwarzschild? It turns out that there is a simple way to do that, which is by imposing some symmetries on the solution. So, symmetries — that's what I want to talk about now. I want to assume that my initial data have certain symmetries; in other words, I want to look at the restricted stability of Schwarzschild. Let me recall a little bit how you take symmetries into account in general relativity, for solutions of the Einstein equations. So assume that we have a spacetime which verifies the vacuum equations, but assume also that there is a Killing vector field Z — and I want it to correspond to a rotation, in fact. So, assume that I have a rotational Killing vector field. Then there is a very general construction, done by taking g(Z, Z), which you call X, and forming the Ernst potential. The Ernst potential is a scalar X + iY. Y can also be defined very easily: you take the derivative of Z, then you take the Hodge dual and you contract with Z again, and that will give you Y. (T is Z. Yes, sorry, Z.) Okay, and once you have that, it's very easy to see that this combination, which is called the Ernst potential, verifies a wave equation: you get X times box of the potential equals d^mu of the potential times d_mu of the potential. Moreover, you can also show that the original metric can be reduced to a metric H, which is now only 2+1 dimensional, together with this complex scalar.
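In symbols, this is the standard Ernst potential construction (the lecture's complex scalar is written sigma here to avoid clashing with the angular coordinate phi used below; factors and signs depend on conventions):

```latex
% Z a rotational Killing vector field of the vacuum spacetime:
X = g(Z,Z).
% Twist one-form, built from the derivative of Z and a Hodge dual;
% it is closed in vacuum, hence locally exact:
\omega_\mu = \epsilon_{\mu\nu\rho\sigma}\, Z^{\nu}\, \nabla^{\rho} Z^{\sigma},
\qquad d\omega = 0 \;\Longrightarrow\; \omega = dY .
% Ernst potential and the wave equation it verifies:
\sigma = X + iY,
\qquad X\,\Box_g\,\sigma \;=\; g^{\mu\nu}\,\partial_\mu \sigma\,\partial_\nu \sigma .
```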
And together they verify a system of wave equations, a system of equations like this: the Ricci curvature of the reduced metric is expressed in terms of the potential, and the wave equation for the potential with respect to the metric H takes a form like this. All right, so this is the simplest setting: you assume axial symmetry and you get a simplified system of equations. But this is not good enough, because in reality I want to start with Schwarzschild, and in the case of Schwarzschild, Y would be zero. In fact, it turns out that if you start with Y equal to zero, it stays zero. That's what is called polarization. So axially symmetric polarized means that I also assume that Y is equal to zero. And then if you start with this component zero initially, it stays zero for all time — that's easy to see — and therefore you ensure that you stay polarized for all time. In the polarized case, the spacetime metric takes this very simple form: X times d phi squared plus g_AB dx^A dx^B, in coordinates t, r, theta and phi. So you see that, relative to the metric, the phi direction is completely decoupled from the other components, and therefore I can think of the reduced metric as being my true metric now: it's Lorentzian and it's 2+1 dimensional. So there is some kind of reduction to a lower-dimensional situation. And the equations for the curvature of this reduced metric — I'm looking at R for this metric, the reduced metric — are coupled with the potential through a simple equation involving its second derivatives D_A D_B, and the potential itself verifies a wave equation. So this is the kind of coupled system that you have to satisfy. You also see that the scalar curvature of this reduced metric is equal to zero, from this very simple fact. And this is what I want: the important thing here is that you stay in the polarized class. If I start in Schwarzschild, where Y is indeed equal to zero, then Y stays zero and the final state stays Schwarzschild.
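A sketch of the polarized reduction in the notation just used (the precise factors are convention-dependent and omitted; this only records the shape of the coupled system):

```latex
% Axially symmetric, polarized (Y = 0) spacetime metric:
g = X\, d\varphi^2 + g_{AB}\, dx^A dx^B ,
\qquad x^A \in \{t, r, \theta\},
% with X = g(Z,Z) > 0 away from the axis. The reduced 2+1 dimensional
% Lorentzian metric g_{AB} is coupled to the scalar X schematically by
\mathrm{Ric}_{AB}(g_{CD}) \;\sim\; \frac{1}{\sqrt{X}}\, D_A D_B \sqrt{X},
\qquad
\Box_{g_{AB}} \log X \;\sim\; \text{(quadratic terms)},
% and, as noted in the lecture, the scalar curvature of the reduced
% metric vanishes.
```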
So this is a simplification which really allows me to talk about the stability of Schwarzschild. It turns out, actually, as we shall see, that these reduced equations are really not important. You might think that this is what I should use now, because I have a simplified equation — just a simple wave equation — and all of the metric g can be recovered from it if I know the potential. Of course, the system is coupled, but it turns out that it's actually not very helpful. Everything that I'm going to discuss now is done in general, more or less; you only have to remember that at some point, in some situations, I have to take polarization into account. So polarization is going to be used, but most of the time it's not; most of the time I have to use the same kind of thing that I would have to do in any other setting, the full stability of Kerr, for example. All right. So maybe I state the result, and then we can take a short break. Here is the result that I have with Jérémie Szeftel: small axially symmetric polarized perturbations — axially symmetric polarized in the sense above — of given initial conditions of an exterior Schwarzschild metric have maximal future developments converging to another exterior Schwarzschild solution, with a final mass m_infinity, which is of course different from the one I started with. And this is a picture I'll talk more about after the break — a picture of the spacetime we construct. We actually start with initial conditions on two null hypersurfaces. I'll explain why we are allowed to do that after the break. So you have to imagine that you have initial data here and here, and you construct the spacetime all the way to scri. Scri is complete. From scri you find the horizon — of course, the horizon you can only find after you have constructed the whole spacetime, because it has to come from this point at infinity on scri.
So you construct the horizon. Here you have, let's say, something like an apparent horizon, maybe. And in addition, I have to construct this timelike surface T, which has a certain role that I'll mention next time. So that's basically the Penrose diagram of the spacetime we construct. And now I think it's a good time to take maybe a three or four minute break, and then I'll explain this result. Yeah. Okay. So this is the statement, and I'll try to describe in more detail what's going on. The geometric features of the construction: first of all, you have an optical function u. An optical function, I recall, is a solution of the eikonal equation. But of course, whenever you talk about — maybe I should erase this, because there are too many things — I don't know yet the solution. So being optical means what exactly? Yeah, let me explain. As I mentioned earlier, we have to solve the Einstein equations together with — sorry — g^alpha-beta d_alpha u d_beta u equal to zero. And in fact, there will be two such functions, u-bar also. The way to think about it is this: when I solve this equation, of course I have to initialize it somewhere. This one we initialize on the initial data, right? Because you have an initial data set. But the other one has to be initialized somewhere as well, and we initialize it here, at I-plus. Now, in reality, you'll see that it's actually initialized in physical space, but in a first approximation you can think of it as being initialized here on I-plus, on scri. And then the level sets of u are at 45 degrees, and they go all the way until they meet this T. So T is some timelike surface which is not too far from r = 2m_0, where m_0 is the mass of the original Schwarzschild — so we can still talk about the original Schwarzschild.
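The eikonal equation referred to here, together with the Schwarzschild model example (using the usual tortoise coordinate r_*, an assumption of this sketch):

```latex
% Optical functions u, \underline{u} solve the eikonal equation
g^{\alpha\beta}\,\partial_\alpha u\,\partial_\beta u = 0 ,
% so their level sets are null hypersurfaces. In Schwarzschild, with
% the tortoise coordinate defined by dr_* = (1 - 2m/r)^{-1}\, dr,
% the model examples are
u = t - r_* \quad (\text{outgoing cones}),
\qquad
\underline{u} = t + r_* \quad (\text{incoming cones}).
```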
This is of course not the event horizon anymore of the spacetime we construct, but it is the event horizon of the one we started with. So T is somewhere to the right of that horizon. Yeah, and it would be below 3m, right? Yes, we make sure that it is below 3m — though it's not that important, but in any case, we certainly take it below 3m. And so again, u is initialized here, okay? And then from here, when I reach T, I start u-bar, which goes in this direction. Of course, we cannot go all the way with u, because you would not be able to cover the entire region of interest. So as a consequence, we just go to T, which is this timelike surface, and then we move in the other direction; this corresponds to u-bar. So: an optical function u-bar in M-int, where M-int is whatever is to the left of T, and M-ext is whatever is to the right of T, right? So T is some kind of timelike surface which is used to distinguish between the region where we go this way and the region where we go that way. Okay, so we have an outgoing geodesic foliation from here and an incoming geodesic foliation from here, and we define null frames — you can define null frames based on these null geodesics. If I have a light cone, I have the null geodesic generator, let's call it e4, and then I can define an affine parameter s by e4(s) = 1. So e4(s) = 1 gives you a foliation of all these light cones, and therefore we'll have a null frame here, and the same thing there — we'll have a null frame here, right? So this is the way to define the Ricci coefficients: Gamma in the exterior region, Gamma in the interior region. Okay, that's enough about this. Now, in reality, the data are characteristic. Yeah, okay — this is something that I will explain in a second, why we are allowed to do that.
The reason is that if I start with a spacelike hypersurface Sigma_0, then I know from a result with Nicolò that if I go sufficiently far towards spatial infinity — this corresponds to i_0 — then the data becomes sufficiently small that I can construct a piece of spacetime all the way to a null cone. And therefore this part we can assume is already understood, right? And then this part here is a finite region, where again, instead of starting from a spacelike hypersurface, I can look at data here, right? So therefore, I can assume that my data is given on null hypersurfaces. Okay, so the data again is here and here. Now, the important point is that the spacetime is constructed by a bootstrap procedure. I don't know yet that I can reach infinity, so I have to keep enlarging my spacetime until I reach infinity, and this is done by a bootstrap. So you have to think of it this way: at any given stage, the spacetime under consideration is only this finite spacetime — I'm only going up to a finite c-bar-star and a finite c-star. But then, in addition, I have a spacelike hypersurface, which we call Sigma-star, which is this one. And the initialization, instead of being done at scri — because I have not yet reached scri — I'm going to do from here. In other words, I construct the u-foliation this way and the u-bar-foliation this way. So the spacetime is constructed like this, and then I'm going to enlarge it. I'll show that if I reach a certain stage, then in reality, because of my a priori estimates, I can go a little bit further. In this way, I go all the way to infinity. So the idea of the bootstrap is that you make certain assumptions and then you show that actually you can do much better — and therefore there is no reason to stop here; you can go further.
That's how it works — that's how it works also in the stability of Minkowski space. Okay, so: the key features of the construction. First of all, the Hawking mass plays a fundamental role in defining the final mass. This is a well-known concept in general relativity. You define it using trace chi and trace chi-bar. Remember that we have all these quantities chi, chi-bar, eta, eta-bar, and so on and so forth, and trace chi, trace chi-bar are simply the traces of chi and chi-bar. Of course, the situation we are in now is one in which we have a foliation by 2-surfaces, so all these quantities can be easily defined. And then the Hawking mass is obtained from this quantity 2m_H divided by r. Here r, I should say, is defined on any 2-surface to be the area radius; in other words, 4 pi r squared equals the area of the corresponding 2-surface at that point. So at every point you also have an r, and the Hawking mass is defined this way, by an integral — of course, the integral over the corresponding surface. So I have a surface here, I take the integral over it, and this defines the Hawking mass. — By the way, Yau is supposed to have defined an improved version of the Hawking mass. Has it been useful in mathematics or not yet, or do you think it is not useful? — I don't think it's useful. I mean, I will explain to you why it's not useful. But don't tell him, because he'll get very upset. Ah, sorry. Okay, so let me say it again. Somehow you have to tie it to real constructions; otherwise, it's too abstract, too general. So in that sense it's interesting, but it's hard to imagine at this stage how it would be useful. All right? In any case, it's not useful here.
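The Hawking mass of a 2-surface S, in the notation of the lecture (this is the standard definition; sign and frame conventions for the null expansions vary):

```latex
% Area radius of a 2-surface S:
|S| = 4\pi r^2 .
% Hawking mass, built from the null expansions tr\chi, tr\underline{\chi}:
m_H(S) \;=\; \frac{r}{2}
  \left( 1 + \frac{1}{16\pi} \int_{S}
    \mathrm{tr}\chi \;\mathrm{tr}\underline{\chi} \right).
% On the standard spheres of Schwarzschild, m_H(S) = m identically,
% and the equations satisfied by m_H have quadratic right-hand sides,
% which is what makes it robust under perturbation.
```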
Then, at infinity — once you go all the way to scri — you can define the final mass m_infinity to be just the limit of m_H. On any u you get an m_H here, and you take the limit in this direction, as u goes to infinity, and then you get the final mass. So the final mass is what you get here — but of course, only after you have constructed the whole thing. The beautiful thing about the Hawking mass is that you can define it locally. You can show that it has estimates; you get very nice equations for it, which are quadratic on the right-hand side, so they are very robust. And the limit — the fact that the limit exists — of course, you have to construct the whole spacetime for that, but once you construct the spacetime, you immediately identify the final mass by taking the limit: you take the limit as r goes to infinity and then the limit as u goes to infinity, and you get the final mass. Okay. Now, here is the most important part: how to construct u. Because now I have to be more specific. Before, I said it is constructed from infinity, but in reality you construct it from Sigma-star. So I have to make choices on Sigma-star to initialize u. In fact, I even have to construct Sigma-star itself, as it turns out — the spacelike hypersurface. And this is the concept that we introduce: the spacelike boundary is foliated by what we call GCM spheres. These are general covariant modulated spheres. So — I don't know, if you don't like the name, please tell me, because we can still change it. General covariant modulated spheres. Modulation makes sense, right? General covariant makes sense.
So the two together make sense. Okay. So what does it mean? It means that you use the full degrees of freedom of the covariance group of diffeomorphisms in order to fix spheres on which certain key quantities pick up specific values — like zero, for example. I would like to make certain things equal to zero. And, you know, the reason is very simple: you go in this direction in order to get estimates for the Ricci coefficients everywhere in the spacetime, right? But if I start badly here, there is no way I can derive anything. So I have to initialize — I have to find good initializations on Sigma-star, or good initializations on scri in some sense. But now you have to be very specific: what are these good initializations? This is what we call GCM spheres, and let me say a few more words about them. Okay. So here is what this is; this requires more of an explanation. — You could just call them, you know, "nice spheres" or "good spheres", people would use that. — Good spheres, yeah. But this is more impressive, I think. GCM. No? Well, okay, we'll discuss it at lunch. But, okay. So you see — you have a 2-sphere and then the corresponding light cone that starts from it, right? And I want to arrange things so that certain key quantities are zero. So for example, on any 2-surface there are certain operators, Hodge operators, which are elliptic operators that arise naturally in the equations. By the way, you don't quite see this in the Newman-Penrose formalism; it's much better to use a geometric approach to see the character of these equations.
But anyway, in the end you can see it everywhere. — They are covariant operators on the 2-surfaces. — Right. But unless you see them as integrable — here they are integrable — you are not going to be able to use them. Okay. Anyway, the operators are defined like this. There is a D1, D2, D1-star and D2-star. D1 takes one-forms into scalars. And D2 takes symmetric traceless 2-tensors — because this is what comes up when you write down the Einstein equations: if you write down the equations, you get symmetric traceless 2-tensors. So D2 takes a tensor psi_AB, which is symmetric traceless, and takes the covariant divergence D^B psi_AB. So psi goes into this. In other words, it takes a 2-tensor into a one-form. — So these one-forms are not transverse? They do not satisfy divergence equals zero; they are just one-forms? — Just one-forms, yeah, absolutely. — This is precisely what the... — Right. It's just that in general the horizontal structure is not integrable, so you cannot use them as an elliptic system; in our case, on these spheres, it is integrable. In other words, what you are saying is that I could use those definitions — they correspond exactly to those operators. Yeah, this I agree, absolutely. It's just that the way we do it has a more geometric description, but anyway, this is kind of irrelevant. So once you have D1, D2, you can take the duals, D1-star and D2-star: these will go from scalars to one-forms.
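These operators act on tensors tangent to a 2-sphere S roughly as follows (in the conventions of Christodoulou-Klainerman; factors and signs vary between references):

```latex
% Hodge operators on a 2-sphere S:
\mathcal{D}_1\,\xi = \big(\operatorname{div}\xi,\ \operatorname{curl}\xi\big)
  \qquad \text{(one-forms} \to \text{pairs of scalars)},
\mathcal{D}_2\,\psi = \operatorname{div}\psi,
  \quad \psi_{AB}\ \text{symmetric traceless}
  \qquad \text{(2-tensors} \to \text{one-forms)},
% and their formal L^2 adjoints:
\mathcal{D}_1^{\star}(f,g)
  \qquad \text{(pairs of scalars} \to \text{one-forms)},
\mathcal{D}_2^{\star}\,\xi
  \qquad \text{(one-forms} \to \text{symmetric traceless 2-tensors)}.
% \mathcal{D}_1, \mathcal{D}_2 are coercive on spheres, while the adjoints
% have nontrivial kernels (e.g. the constants for \mathcal{D}_1^{\star}),
% which is the point made in the lecture below.
```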
So D1-star goes from scalars to one-forms, and D2-star goes from one-forms to symmetric traceless 2-tensors. Right. Okay. So the operators D1, D2 — you can say that they are coercive on spheres — coercive — and the adjoints D1-star, D2-star are not: they have non-trivial kernels. And this plays a very important role in the analysis. Okay. So anyway, what we want is the following. You take trace chi on this sphere — trace chi of S, which corresponds to the trace chi of this foliation — and take D1-star of it and then D2-star of it. So I can impose, for example, D2-star D1-star of trace chi equal to zero; D2-star D1-star of trace chi-bar equal to zero; and also D2-star D1-star of mu — again, mu depends on S — equal to zero. So what is mu? Mu is a quantity which I'm not going to write down, because I don't think there's any point, but it's the mass-aspect function — I'm sure that you know what it is. It's defined using a combination of rho and so on and so forth — some connection coefficients — but let me not be very precise here. In any case, it's something at the level of the curvature. Okay. I can't quite explain why you take these particular ones, but it's extremely important to impose some such conditions. And you see, you take essentially three such conditions, which corresponds to the number of degrees of freedom of the transformations that I wrote down before, which go from one frame to another frame. Right — again, this has to do with the fact that a priori I don't have any way of choosing a particular frame. And it's right here that you make the choice: you make the choice by using the frame transformations, in such a way that those three conditions are satisfied.
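Schematically, the three GCM conditions on a sphere S of the foliation read (this is the shape they take in the Klainerman-Szeftel work, up to low modes which are treated separately; a sketch, not the precise statement):

```latex
% GCM conditions on the sphere S:
\mathcal{D}_2^{\star}\mathcal{D}_1^{\star}\,\mathrm{tr}\chi^{S} = 0,
\qquad
\mathcal{D}_2^{\star}\mathcal{D}_1^{\star}\,\mathrm{tr}\underline{\chi}^{S} = 0,
\qquad
\mathcal{D}_2^{\star}\mathcal{D}_1^{\star}\,\mu^{S} = 0 ,
% with \mu^{S} the mass-aspect function of S. The number of conditions
% matches the degrees of freedom (f, \underline{f}, \lambda) of the null
% frame transformations, which is how the frame ambiguity is fixed;
% in particular one arranges, e.g., tr\chi^{S} = 2/r^{S} as in Schwarzschild.
```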
This leads to a huge Hodge system — a very coupled system — for f, f-bar and lambda. In other words, let me write it like this: you get an equation D(f, f-bar, lambda) = ..., and you show that it's coercive. The most important thing is to show that it's coercive. To do that, you also have to take into account these kernels, because the kernels of D1-star and D2-star are non-trivial. So you really have to mod out the kernels and so on; there is a lot of work that needs to be done. But the idea is that you use the full number of degrees of freedom of your local gauge transformations in order to construct such spheres. — But at the end, the quantities here are scalars. Are these scalars constant — uniform on your sphere — or do they have a variation? — Yes, so for example, trace chi of S will in fact be 2 over r_S; I didn't write it down, but this one is constant. But of course the others can have a kernel, right? So they are fixed — but in order to fix them completely, you need something else, which I didn't write. Yes — they are completely unique; they are uniquely defined in the end. Everything will be uniquely defined in the end, for any such sphere. Okay, so then of course you have to show that such spheres exist, because — okay, let me say something about the bootstrap. Actually, let's look at the picture. The way the bootstrap works is that you assume you start with initial data; you use local existence, so you can always go a little bit, and then you keep going until you reach a maximum — you cannot go any further, right? In principle, at some point you might stop because there are singularities and so on; if you have instabilities, you cannot go forever.
So you assume that you stop somewhere, right? But then, on the Sigma-star where I stop, I assume that the spacetime is such that on Sigma-star I have these GCM spheres — these conditions are satisfied on Sigma-star. As a consequence, I can get very good estimates — and this is of course a huge, long step — for all the connection coefficients and all the curvature in this region. And then, because these estimates are good enough, I can extend the spacetime a little bit: I can use the u-foliation to go slightly further in this direction and slightly further in that direction. In other words, I construct a slightly bigger spacetime. But of course, as I construct this slightly bigger spacetime, it's not at all clear that the new boundary Sigma-star verifies the GCM conditions, right? Because I do an extension coming from here, there's no reason why they should be satisfied. So, in the region I have constructed, where I have extended the previous spacetime, I have to show that there exists a new Sigma-star which consists of these GCM spheres. That's where you actually have to do most of the work: to show that these things can be found. Okay, I'll come back to this in a second, but for the moment, this is clearly the most important part of the construction. So these GCM spheres are constructed, as I said, by solving a large elliptic Hodge system, coupled with transport equations, as I will explain later on. Okay, now, there is this fact that, together with the knowledge of alpha and alpha-bar — remember, alpha and alpha-bar are in principle determined from the quantity P that I mentioned earlier, which comes from the Chandrasekhar transformation, and which itself does not depend much on the gauge conditions —
So I can imagine that, at least in principle, alpha and alpha bar are determined. And then, together with the GCMS conditions, I can show that all the other connection and curvature components are controlled; by controlled, I mean controlled with specific decay rates for each component of curvature and connection. So, just to understand this GCMS business again: are you saying that you show there exists a frame, using the freedom in the choice of frame, such that the trace of chi satisfies this? Yes, although we also have to construct how the sphere sits in the spacetime. So there are two parts to the construction. First of all, again, I assume that I have already extended the previous spacetime; the previous spacetime ended in some Sigma star which had GCMS, but once I extend it, I don't have them anymore, right? So now I have to construct a new one. In this region I have a lot of control on the extended quantities, so let me call them Gamma-extended and R-extended: I extend the Gammas and the R from before into this region, and I have a lot of control here, right? But what I don't have is GCMS. And you have coordinates? Yes, it's essential that I also have coordinates here. So, in other words, I have coordinates in this region too. You have to change the... Yes, I am going to change the coordinates in such a way that it will be like that. So once I have this, I also have coordinates; that is not such a big deal. But now I use everything that I have in order to construct a new GCM sphere. How do I do that? I take a sphere of the old foliation, the foliation which has been extended, so this will be a sphere like that; I take its south pole and I construct a new sphere, which is a GCM sphere. And this is... No, no, the whole sphere has to be constructed, of course.
Otherwise it would just be the linear theory; nonlinearly, I have to construct the whole sphere. So you construct the whole sphere, and then you also construct a Sigma star which consists of GCM spheres, and this is your new boundary. And then you go this way and you show that this can continue forever. Okay. So the spacetime M, the spacelike hypersurface Sigma star and the two geodesic foliations are constructed by a continuity argument, which I already mentioned, starting with the initial data layer, right? That's what I said; the initial data layer was constructed in joint work with Nicolò (2001, 2003). And then you derive sufficient decay for Gamma and R, in other words for the connection coefficients and the curvature components, and then you close back to the main wave equation for P. The whole point about this equation is that now I have something on the right-hand side, exactly as in what we discussed last time: there I had to solve a system of equations by taking Lie derivatives with various vector fields, and when I commute, I get d of Lie_X R and delta of Lie_X R equal to something very complicated on the right-hand side, something which depends on the deformation tensor of X and also on curvature. And this, of course, could kill you, because these form a system of wave equations, a Maxwell-type system, but with a right-hand side, and the right-hand side could be terrible: if I don't have enough decay for the right-hand side, I will not be able to close. The same thing happens here. If I go back to this Chandrasekhar equation, which is actually a Regge-Wheeler type equation: because I am in the nonlinear theory, this term here, which is quadratic, can still be extremely complicated, and if I don't have enough information about Gamma and R, I will not be able to close.
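Schematically, and hedging on the exact potential and constants (the notation below is illustrative shorthand, not the lecture's precise equation), the Regge-Wheeler type equation and the effect of commuting it with vector fields look like this:

```latex
% Schematic Regge--Wheeler type equation for the Chandrasekhar
% quantity; the error term is quadratic, of size O(\epsilon^2):
\Box_{\mathbf g}\, \mathfrak{q} \;-\; V \mathfrak{q}
  \;=\; \mathrm{Err}\big[\Gamma, R\big] \;=\; O\big(\epsilon^{2}\big).

% Commuting with a vector field X produces commutator terms driven
% by the deformation tensor \pi^{(X)} of X (and curvature):
\Box_{\mathbf g}\big(\mathcal{L}_X \mathfrak{q}\big) \;-\; V\,\mathcal{L}_X \mathfrak{q}
  \;=\; \mathcal{L}_X \mathrm{Err}
  \;+\; \big[\,\Box_{\mathbf g} - V,\ \mathcal{L}_X\,\big]\mathfrak{q},
\qquad
\big[\Box_{\mathbf g}, \mathcal{L}_X\big]
  \;\sim\; \pi^{(X)} \cdot \nabla^{2} \;+\; \nabla \pi^{(X)} \cdot \nabla .
```

This is why sufficient decay for the connection and curvature is needed: without it, the right-hand side cannot be absorbed and the estimates do not close.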
And therefore, somehow, the essential point of the entire construction is that I have to derive sufficient decay information about Gamma and R so that this error term does not create any problem for estimating P. And of course the estimates for P are connected back to estimates for Gamma and R, and so on and so forth; this is the usual kind of bootstrap. All right, now let me mention at least some of the main statements in the theorem. So, a little more precisely: you start with initial conditions in the initial data layer, measured in some norm. I am not going to specify the norm here, because there is no point: these norms are relatively complicated, they involve powers of r, and different components carry different powers of r. So I start with initial data whose norm is less than some epsilon zero, and epsilon zero has to be sufficiently small in order to prove stability. The conclusion is that there exists a future globally hyperbolic development, with a complete future null infinity I plus and a future horizon, which verifies the following. Okay, so now I want to say something about norms in the spacetime. There are various types of norms; again, I am not going to make them precise, because they are too technical. But it is useful to remember that there will be norms which encode pointwise decay for various quantities (these quantities will come up in a second), and "k small" refers to the fact that you take only a small number of derivatives. It is actually important that you have to take quite a lot of derivatives, and I am going to distinguish between k small and k large, where k large is much bigger than k small. Decay, I mean very precise rates of decay, is only needed for a small number of derivatives; "small" can still be about 50 derivatives.
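The two families of norms just mentioned can be summarized schematically as follows; the exact weights are hedged (the sample rates are the ones quoted below in the lecture, with the u-weights of the lower components left unspecified):

```latex
% Decay norms (pointwise, weighted in r and u) for k \le k_{small};
% energy norms (weighted in r only, no u-decay) for k \le k_{large}:
\mathfrak{N}^{(\mathrm{dec})}_{k_{small}} \;+\; \mathfrak{N}^{(\mathrm{en})}_{k_{large}}
  \;\le\; \epsilon,
\qquad
k_{small} \;=\; \tfrac{1}{2}\, k_{large} + 1 .

% Sample pointwise rates for the extreme curvature components,
% with \delta > 0 small (u-weights of the other components omitted):
|\alpha|,\ |\beta| \;\lesssim\; \epsilon_0\, r^{-3} (u + 2r)^{-\frac12 - \delta},
\qquad
|\underline{\beta}| \;\lesssim\; \epsilon_0\, r^{-2} (\cdots),
\qquad
|\underline{\alpha}| \;\lesssim\; \epsilon_0\, r^{-1} (\cdots).
```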
I don't have to be very precise about how many derivatives I take, except to say that there is still a finite number at the end of the day. Anyway, so this is a quantity which measures decay of my various quantities, and this other one, for a large number of derivatives, measures the energy; that one does not encode decay, it has only powers of r. So this is a norm with weights in r only, no decay in u, and the whole thing has to be less than epsilon. And you see, k small is one half k large plus one. In particular, these norms tell you that the curvature components alpha and beta, for example, the extreme components, decay like one over r cubed times one over (u + 2r) to the one half plus delta, with delta small, and either like this or with the delta placed so that you have more decay in u. Anyway, this is the rate of decay with respect to both r and u. In particular, on u equal constant, the decay is just r to the seven halves, and this is exactly consistent with the stability of Minkowski space: that is exactly the rate we had there, r to the seven halves for these components alpha and beta. Then there is the component beta bar, which decays only like one over r squared, and the component alpha bar, which decays like one over r; this is the radiative component, the one you see in LIGO. The only one you see in LIGO is this one-over-r component. And then there are the components of the Ricci coefficients, chi hat and zeta and so on and so forth; they all have very precise rates of decay. It is extremely important to be very precise, exactly for the reason I mentioned: you have to control that error term at the end of the day. And the delta in the decay is a strictly positive number which you choose small. You can also take it larger, actually, but we didn't. Can you impose it? Because recently I had discussions about this: the decay of alpha bar, in general, will be one over u, without the delta. No, no, I am speaking as a function of u. Ah, yes, after multiplying by r: you get just one over u for large u, and this is very important in four dimensions, but it is not faster than one over u for alpha bar, and maybe also for chi bar. So is it something you impose, the one plus delta? Because physically the tail effects impose that it cannot decay faster than one over u. You mean for the two-body problem? Yes, for solutions which at some stage have quadrupole moments; so I wonder whether you impose this as a choice of faster decay. It is consistent to do that; it has to do with your initial conditions, your initial conditions are such that you can also get that. But I am curious about what you say, so maybe we should discuss it. Okay, so in any case, this is what you have. In the interior, everything decays like u bar to the one plus delta; remember, u bar is the optical function corresponding to the interior. So they all have uniform rates of decay in the interior. The mass m infinity is, as we said, defined like this, and you get that m infinity is close to m zero; in other words, you don't get too far away from the original mass. On the future horizon you get an asymptotic: r is equal to 2 m infinity plus something which behaves like this. And for rho it is mixed: remember that rho is obviously not small, it has to have a correction; this is the Schwarzschild value of rho, so you have to take it away, and then rho minus this value is less than these quantities, and so on and so forth. So there are
all sorts of very precise statements. In fact, you have no choice: when you do stability in general relativity, you have to be very, very precise with all the components; you have to get the correct decay, both in r and in u, with a lot of precision. Okay, now the coordinates, since you asked me about the coordinates: this is how they look. You can construct coordinates such that the final metric has this form, with m infinity here, and in the interior it has this form. You have the Bondi mass law formula; the Bondi mass, of course, is the limit of m(u, r) as r goes to infinity, and the standard thing you get is that the final Bondi mass is exactly the m infinity which we already discussed. Okay, so now, you know that Polish people say it should be called the Bondi-Trautman mass. Fortunately Christian is not here; he would have complained, I'm sure. So do you do that, do you call it Bondi-Trautman? There was a paper of Trautman before Bondi, doing the same thing. Ah, okay, then it should be called Bondi-Trautman. Good, then I'll change it. Okay, so this is the formula, the Bondi mass. Okay, so now the main intermediate steps. Theorem number one. The construction unfortunately takes a lot of space to carry out, but conceptually it is not too difficult: once you understand what is going on, it is not too difficult to describe. In a first approximation, you start with initial data less than epsilon zero, and I look at this equation, which was the Chandrasekhar equation; I call it qfrak here, but before it was called P, so P and qfrak are the same thing. I call it qfrak because in the polarized case this P, which is actually a 2-tensor, reduces to something simpler. So you show that the solutions of this equation verify this norm, a norm which involves everything, including the decay rates, with bound epsilon zero for qfrak. Again, these types of norms I am not going to write out specifically, but they have to do with this kind of behavior, and I think I don't have to say much more right now. And here you have k small plus 20 derivatives, because you have to do the bootstrap. Sorry, I didn't mention the bootstrap. You make a bootstrap assumption; let me write it here. When you lose derivatives, you need to start with many, right? So you see, you have an epsilon zero which corresponds to the initial data, which is something I can make small, and then I have a bootstrap assumption with an epsilon which is typically going to be larger than this. And also I have k small and k large; the bootstrap assumption involves k small and k large. For example, these decay norms are only for k small. And then there is another one here: these are decay norms, and these are energy-type norms, which are bounded in terms of this parameter epsilon. So what I have to do now: when I look at this equation, it is an equation with error terms on the right-hand side, and in principle these error terms will look like epsilon squared times some decay rates, which are very important in order to close, that is, to be able to estimate this qfrak, in other words to get this type of estimate. So you automatically get epsilon zero, because epsilon squared can always be made strictly less than epsilon zero, and therefore somehow I beat the bootstrap constant. But at
the same time I lose... sorry, excuse me, I gain a certain number of derivatives: originally I had the bootstrap assumption for k small, and now I actually get k small plus 20. The reason I want that is that in the process I keep losing derivatives, and at the end of the day I want to get back to exactly k small; in other words, I want to beat the bootstrap assumption. I want to show that the bootstrap assumption is not only verified: you made an assumption, but at the end you get something even better. And the norms are reasonably weighted? Yes, they are weighted, weighted in r, with derivatives, correct, exactly. Okay, so you see, that is why you want to get a little bit more than k small: because you are going to keep losing. So, in a first approximation, what you show is that qfrak has a good estimate in terms of epsilon zero, which is your good parameter, because this is what you control in terms of the initial data. All right, now the next two theorems show that once I have qfrak, I also have alpha and alpha bar. This is what I mentioned earlier: alpha goes into that P, and if I know P, I can go back and get estimates for alpha. I lose a certain number of derivatives when I do that, but nevertheless I am still above k small plus 15. And then comes the hard part, because here I have to use the GCMS constructions to get from these estimates for alpha and alpha bar to the estimates for all the Ricci and curvature components, and to show that these are still bounded by epsilon zero. You lose another 10 derivatives, but in the end you get k small plus 5 derivatives bounded by epsilon zero, while the bootstrap assumption had k small derivatives bounded by epsilon. So you have obviously improved the bootstrap assumption
at this stage. And then you have to do something about the other norm, the one that involves the energy; I didn't say much about this and I am not going to, but you have to do something more. And then, finally, you have to extend the spacetime. You see, up to now all these theorems concern the spacetime which I call the bootstrap spacetime, the one that ends in Sigma star, where Sigma star consists of GCM spheres, those conditions which, as I said, are extremely important. With those conditions I am able to derive all these estimates, which are improved estimates, better than the bootstrap; and since they are better than the bootstrap, it means I can go further, right? That is Theorems seven and eight, which say the following. First, you define U, in R plus, to be the set of values u star such that an admissible spacetime exists for u up to u star. An admissible spacetime is a spacetime that satisfies all the bootstrap assumptions, plus the fact that Sigma star consists of GCM spheres; that is exactly what I mean by admissible: a spacetime which ends in a Sigma star consisting of GCM spheres and which verifies all the bootstrap assumptions. So then I look at the maximal value of u star for which such a thing exists. In Theorem seven, because of the previous six theorems, and because I have improved the bootstrap assumption, I can show that there exists a delta zero which allows me to go a little bit further; and once I can always go a little bit further, I show that in fact I can go for all time, because otherwise I reach a contradiction. So that is basically the idea. All right, so here is the construction of the GCM spheres; let me go very fast over this, because I think that, conceptually, this is the most interesting new part. As I said, assume that you have a metric in your spacetime which looks like this, and assume
that you have control on these coefficients: I have control on this, on this and on this, right? That is very important, and the control comes from the fact that when I extend my spacetime, in Theorem seven, when I do the extension, I show that I still control all the metric coefficients and all the Ricci coefficients. Okay, so then I look at all the possible frame transformations, and now I have to be more careful: I have to look at the full set of transformations. I wrote here the lower-order terms as well, because even these lower-order terms are in fact important. So a general transformation looks like this: given f, f bar and a, which are all O(epsilon), I get the general transformations of this type. All right, so now here is what I want to do. I start with an S0, a sphere of the background foliation corresponding to fixed values of u and s; for every fixed u and s there is a specific sphere. I start with that sphere and I want to make a deformation of it, a map that goes from S0 to S, right? And here I am going to use polarization, because we have not done the construction in general; here we have actually used polarization. Polarization means that every deformation can be described in terms of functions U and S which depend on the parameter theta; they do not depend on phi, because of the polarization. So I have to construct U and S and the frame. What I have to do is to find a frame, f, f bar and a, corresponding to S and to the functions U and S, such that the conditions I want, the GCMS conditions, are verified on S. Okay, so the proposition: given S0... here is actually a slightly different version of the GCMS conditions, but it does not really matter, the idea is exactly the same. Here I assume that I have an S0 close to a bounded value of r; r0 is like 2 m0 (1 + delta_H). And I am going to go from the
frame e3, e4, e theta to a new frame which is adapted, right? The important thing is that, given S0 and a deformation of it, I want to have on this deformation a frame e3^S, e4^S, e theta^S, while originally I had e3, e4, e theta. At every point on this deformation I have the old frame, and I am trying to go from the old frame to a new frame; but this frame should be adapted to S, in other words e theta^S should be tangent to S, and the other two should be transversal. So I have to make sure that my construction takes this into account. And I also want some GCMS conditions to hold; again, it does not quite matter which conditions you choose, the important thing is that I have three conditions: here kappa^S is 2 over r^S, and these other ones are 0, for example. Then I define this adapted null transformation, which means that the map Psi takes the original e theta into e theta^S, which is tangent to my surface. This leads to a compatibility condition that I have to write down; I am not going to write it here, because these equations are rather complicated, but in any case everything can be expressed in terms of equations which tie U and S to f, f bar and a. So there is a system of equations tying U and S to a, f and f bar, which are transport-type equations, and then there are the equations for a, f and f bar; these are the main things. These equations, in addition, are tied to the GCMS conditions, and the GCMS conditions give you an elliptic system. In other words, I should write it here; schematically things look like this: you have an S0, you have the deformed surface, you have U and S, and here you have f, f bar and a, which solve an elliptic system, let's say a complicated elliptic system, on S. So at every point on S these are defined, and this corresponds to the transformation that goes from
the old frame to the new frame: so this is a two-surface, and at every point I have the old frame and the new frame, where the new frame is obtained from the old frame by this transformation. So the GCMS conditions become just this, and in addition there are equations of the type, say, U prime is connected with f and f bar by some complicated equation, and the same for S prime; these are some kind of transport equations. So it is a system: what I have to solve, I have to find U, S and a, f, f bar which verify a coupled system between transport equations, relating U and S to f and f bar, and this elliptic system, right? So this is what you have to do, and it leads to an iteration. At every step you have U_n, S_n, a_n, f_n and f bar_n, starting with the trivial Psi_0, which is just the trivial deformation of S0 to S0. And then U_n, S_n define a map; you see, this is what is complicated, because for every iterate you define a map from S0 to a surface S_n, some deformation, and on it I now define this elliptic system. So on S_n I define a_{n+1}, f_{n+1}, f bar_{n+1} to verify this elliptic system, where there is some kind of Hodge system corresponding to this surface, and then I construct a new pair. So this is the way to define the new iterate: given that the previous step is already known, I define a_{n+1}, f_{n+1}, f bar_{n+1} by solving this type of Hodge system, and once I have these, I find U_{n+1}, S_{n+1} by solving a transport equation. Of course, the difficulty is that the sphere at step n is different from the sphere at step n+1, right? I have S0, I have an S_n here and an S_{n+1} there, and somehow I have to compare these two surfaces, and of course
the only way to compare them is to take the pullback to S0 and compare the corresponding metrics on S0, and so on and so forth. So it is a complicated procedure, but conceptually it is pretty clear what you have to do; in the end it is a contraction argument, and so on and so forth. So I think this is probably a good place to stop. Is it important that you are close to 2m? Because at some point there appears the condition that you are close to... Yeah, no, so this is a simplified version which is close to 2m, but in what I described it actually happens far away, where the powers of r are important, and so on and so forth. Right, it's true. But you know, I wonder: this cannot be too different from some of the things you have to do in terms of the center-of-mass frame; I mean, basically taking into account that the center-of-mass frame changes with the dynamics, right? Where is the condition that you are mass-centered, that you have no dipole? Well, it is exactly this GCMS, right? These GCMS conditions are exactly that: that is exactly where you say that you are centered, because essentially some quantities are uniform, without the dipole component that would correspond to a displacement of the center of mass. Just to understand. Right, exactly. But obviously, in your calculations you have to do that too, of course, at every point. Right, and we are going to talk about this, because in general, when you have a system which is radiating, with sources, with bodies, the thing is going to recoil: you emit gravitational waves, and more gravitational-wave momentum is emitted in one direction, so at the end your source is moving in one direction and the gravitational waves compensate. So we do not keep the center of mass fixed in the gauge, of course. But here you keep
it, you adapt your... Oh, we are not in a center-of-mass frame for the source, if you want to describe the central part of the spacetime. But here maybe you don't see linear momentum yet, because of the polarization property? Yes, right. That is going to be more difficult, but it would be nice to talk to you about this.
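For reference, the general null-frame transformations invoked in the GCMS construction above have, to leading order in the transition coefficients f, f bar and the factor lambda, the following familiar schematic form. The quadratic and higher-order corrections, which the lecture stresses are also important, are suppressed here, so this is only a linearized sketch.

```latex
% Leading-order part of a general null frame transformation adapted
% to a deformed sphere (quadratic terms in f, \underline{f} omitted):
e_4' \;=\; \lambda \big( e_4 + f\, e_\theta \big) + \text{l.o.t.},
\qquad
e_3' \;=\; \lambda^{-1} \big( e_3 + \underline{f}\, e_\theta \big) + \text{l.o.t.},
\qquad
e_\theta' \;=\; e_\theta + \tfrac{1}{2}\,\underline{f}\, e_4 + \tfrac{1}{2}\, f\, e_3 + \text{l.o.t.}
```

The normalization g(e_3', e_4') = -2 is preserved by this form, which is why the three scalar degrees of freedom (f, f bar, lambda) are exactly what the coupled transport-elliptic system of the construction has to determine.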