Good, thanks a lot. So let me remind you where we stopped. We are trying to solve the Cauchy problem: we want to construct a spacetime using hyperbolic equations, so we take Cauchy data on some surface and try to solve. What I told you is that if you impose some condition on the wave operator acting on the coordinates — say, equal to zero; it could be more complicated, but say zero for simplicity — then the Einstein equations become a system which is still very ugly — I wouldn't want to write it out by hand or solve it by hand — but at least you have good existence theorems. So to solve this you need g_αβ at t = 0, and its time derivative. You're going to tell me: what are the coordinates, what are the coordinates already here? Because by doing this I'm using a time function, so I've already introduced the notion of time — t would be, say, x⁰ or something like that. And then you get these equations. To solve them, you prescribe any metric coefficients and any time derivatives, and you solve. And we stopped by saying: there must be a catch, because we know that the full Einstein equations are not hyperbolic. And the catch is the constraint equations — and thank you, Thomas, for catching a mistake here. If you want to get the other direction — in other words, you solve the reduced problem and you want to say that you actually have a solution of the Einstein equations — then you need the constraint equations, which are equations that are supposed to hold on the initial data surface. And the origin of the constraint equations is easy to understand if you think about the Bianchi identities, which most of you know very well. One corollary of the Bianchi identities is that ∇_μ G^μν = 0, no matter what metric you take. So obviously, if you solve the equations G_μν = 0, this is going to hold.
But the point is that whether this metric satisfies any equations or not, this identity holds. So this is true for all metrics, whether they satisfy the vacuum equations, the field equations, anything — any metric satisfies it identically, period. Good. And this implies that G^0ν cannot contain second time derivatives of the metric, no matter what coordinates you take. And it's easy to see. How does it go? You write out the identity — this is going to be our equation (3) today: ∂_0 G^0ν + ∂_i G^iν + Γ · G = 0, where the ∂_i are space derivatives and the last term is a sum of Christoffel symbols times components of G. This holds no matter what the metric is. Good. Now let's count the derivatives we have here. The Einstein tensor is built out of the Ricci tensor, so it has at most two derivatives of the metric; Γ has only one derivative, so the term Γ · G has at most two derivatives of the metric. How many time derivatives can the term ∂_i G^iν have? At most two time derivatives of the metric, because ∂_i produces only a space derivative. So this term can have zero time derivatives, maybe one, maybe two — but certainly not three. Now, if G^0ν had two time derivatives of the metric, then because of the explicit ∂_0, the first term would have three. If you just think abstractly about a field equation, there's nothing strange about that. But this is an identity, no matter what metric you take: however crazy the metric, everything cancels by algebra. So if there were three time derivatives of the metric in ∂_0 G^0ν, they would need to cancel by algebra against three time derivatives in the other terms — and there aren't any there, OK? So G^0ν cannot contain second time derivatives of the metric.
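To keep track of the derivative-counting argument, here it is written out compactly (my transcription of the board formula, schematically):

```latex
\nabla_\mu G^{\mu\nu} = 0
\quad\Longrightarrow\quad
\partial_0 G^{0\nu} \;=\; -\,\partial_i G^{i\nu} \;-\; \Gamma\cdot G .
\tag{3}
% G has at most two derivatives of the metric (it is built from Ricci),
% so the right-hand side has at most two time derivatives.
% Hence G^{0\nu} itself contains no second time derivatives of the metric.
```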
And these are the constraint equations. And again, if we go back to this, it does feel weird, because the Einstein equation has second time derivatives for generic indices μν — but the 0ν components don't. And of course there is some miraculous interplay between these equations which makes things compatible. In any case, these are the mysterious constraint equations, and explicitly they look like this. If I were giving lectures on the Cauchy problem, it would take me about two and a half hours to derive these equations properly, and I don't have time for that, so let me just write down what they are. So let me say that g_ij from now on is the space part of the spacetime metric — so forget about the spacetime metric, or rather I'm going to write it out explicitly below. And R, the Ricci scalar of this space metric, satisfies: R = 16π T_μν n^μ n^ν + |K|²_g − (tr_g K)², with G = 1, c = 1, where n is a mysterious vector and K a mysterious tensor. Here |K|²_g is the norm of K taken with the metric g, and tr_g K is its trace with respect to g. This is a spacelike surface, so g is a Riemannian metric, and this is the Riemannian norm of a tensor: explicitly, |K|²_g = g^ik g^jl K_ij K_kl. If you're a mathematician, that should be clear; if you're a physicist, maybe you're not used to this notation, so let me just write it out — you just use the metric to calculate the norm of a tensor. And the trace is tr_g K = g^ij K_ij. And K_ij is a symmetric tensor, called the extrinsic curvature, and you should think of it as ∂_t g_ij, OK? So g_ij in — yes, exactly. If you want to think about the spacetime, let me just write the spacetime metric like that: 𝒢 = 𝒢_00 dt² + 2 𝒢_0i dt dx^i + g_ij dx^i dx^j.
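Collected in one place, the scalar constraint as just stated reads (in the lecture's conventions, Λ = 0 for the moment):

```latex
R(g) \;=\; 16\pi\, T_{\mu\nu}\, n^\mu n^\nu \;+\; |K|_g^2 \;-\; (\operatorname{tr}_g K)^2,
\qquad
|K|_g^2 = g^{ik} g^{jl} K_{ij} K_{kl},
\quad
\operatorname{tr}_g K = g^{ij} K_{ij}.
```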
So this g is now the space metric, and the calligraphic 𝒢 is the spacetime metric, so that we don't get confused — we'll probably get confused at some stage anyway, but good. So think of the Cauchy problem: you need the fields and their first time derivatives. At this level the field is only the space part of the metric — that's the only part of the metric that enters the constraint equations — and the time derivative of the metric is encoded in this tensor K. I'm going to write down the formula shortly, but you really should think of K as a geometric version of the time derivative of the space part of the metric. And these are the things which enter the constraint equations, OK? Good, so this is the scalar constraint equation. And I didn't tell you what this n is. Well, if you want to think in terms of the spacetime metric, our surface Σ is supposed to be a spacelike surface in the spacetime. A spacelike surface has a field of unit normal vectors, and n is this unit normal to Σ; if you want an index on it, write n^μ. So roughly speaking, you can think of T_μν n^μ n^ν as the T_00 component of the energy–momentum tensor — that's almost what it is — but geometrically, the right way to think about it is this: you have the field of normals, and the normal contracted twice with the energy–momentum tensor is the local energy density of the matter fields on Σ. Good. So what is this K guy? The simplest way of thinking about it is wrong, but it is the simplest: K is one half the Lie derivative of the metric in the normal direction, K = ½ ℒ_n g. So the clearest way of thinking about it is that the normal is the time direction.
The time direction is the normal one, and K is the time derivative — but not any stupid time derivative: the derivative in a geometric direction. Now, why is this a very stupid way of writing it? Well, two things. First, it might be stupid, but for calculations it is actually the simplest formula: if you really need to calculate K and somebody gives you an explicit metric, take this formula — forget any other formulas you have seen with Christoffels and such. It is a stupid equation because the Lie derivative in the normal direction wants a field which is defined away from Σ: you want to differentiate something in a normal direction — that's what the Lie derivative does — so you need the field to be defined away from Σ. But we don't have a spacetime yet; we're solving a Cauchy problem, we just have t = 0. So from that point of view — just Cauchy data — it's a stupid formula. But from a practical one, that's the one. And, well, it is certainly something like ∇_i n_j + ∇_j n_i — just the usual formula for the Lie derivative: you take the normal vector and differentiate. It's funny, because written this way it doesn't have the feeling of a time derivative at all. The proper mathematical way would be as follows: K of two vectors, K(X, Y), is the scalar product — now with the spacetime metric — of the normal with the covariant derivative of Y in the direction X: K(X, Y) = −⟨n, ∇_X Y⟩. I'm always worried about signs here, whether I have the correct sign or not; I can always get away with it by saying: who says the normal has to point in this direction — it could point the other way. But I think I need a minus here. So that's the geometric way — again, if you had a spacetime, that's how you would do it. So whatever it is, that's the formula. And from the point of view of the Cauchy problem, you forget where it's coming from: you just have another tensor field.
And this tensor field is then used as part of your Cauchy data to determine the derivative of the metric in this direction. So if you remember the formula for the Lie derivative, it should be something like this: K_ij = ½ ( n^μ ∂_μ g_ij + (∂_i n^μ) 𝒢_μj + (∂_j n^μ) 𝒢_μi ) — it's a plus, because one index is up and one is down — and the 𝒢's here are the full spacetime metric. This formula makes it clear that K_ij carries information about the derivatives of the metric in a direction pointing away from your surface. So that's how you can juggle these things to get initial data. In any case, this was my scalar constraint equation, which probably deserves a number — it will be number (4) — and number (5) will be the vector constraint. Yes, here — this comma means a covariant derivative. That's what you meant? Right. Okay, I didn't want to do this, but let me do it: the formula K(X, Y) = −⟨n, ∇_X Y⟩ is the same as +⟨∇_X n, Y⟩. Let's see — do I have the signs right now? It's complicated because of this minus. You use the chain rule: differentiate the scalar product ⟨n, Y⟩ along X. You get a term with X acting on the metric — the covariant derivative of the metric, which is zero. You get a term with the covariant derivative of n contracted with Y, and a term with the covariant derivative of Y contracted with n, which is the one in our definition. And the whole thing is zero, because X and Y are tangent, okay? I should have said this before: X, Y are tangent to Σ, so the scalar product of the normal and a tangent vector is zero, and the derivative of zero is zero. And therefore ⟨n, ∇_X Y⟩ = −⟨∇_X n, Y⟩, and I get my formula with the derivative of the normal — that's what you wanted. So you can juggle with these identities and transform K into this form, or this form, or this form.
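So, side by side, the three equivalent ways of writing the extrinsic curvature that came up (my transcription; X, Y tangent to Σ, n the unit normal):

```latex
K \;=\; \tfrac12\,\mathcal{L}_n\, g,
\qquad
K(X,Y) \;=\; -\,\langle n,\, \nabla_X Y\rangle \;=\; \langle \nabla_X n,\, Y\rangle,
```

```latex
K_{ij} \;=\; \tfrac12\Bigl( n^\mu \partial_\mu g_{ij}
 \;+\; (\partial_i n^\mu)\,\mathcal{G}_{\mu j}
 \;+\; (\partial_j n^\mu)\,\mathcal{G}_{\mu i} \Bigr).
```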
That's the reason I wrote the middle one: it's the only reasonable one from the point of view of intrinsic geometry, because you are differentiating only tangentially to the surface, okay? While in the others you have to differentiate in directions which are not defined. Good. In any case, there is a scalar constraint equation, but there is also a vector constraint equation, which involves the divergence of K. And this is another one of those equations where I'm never sure about the signs: D_j ( K^ij − (tr_g K) g^ij ) = 8π T^i_μ n^μ, with G = c = 1. I believe the sign is minus — please correct me if I'm wrong; again, I can always get away with it by taking the other normal, but let's be coherent. So on the right you have the energy–momentum tensor contracted once with the normal, leaving one space index, and this is my vector constraint equation. And big D is the covariant derivative of this space metric g — I didn't write ∇ because that's for the spacetime, good, very good. Covariant derivative of the space metric. So this is the main difference between a plain wave equation and general relativity. The initial data for a plain wave equation, □φ = 0, are two arbitrary functions. Here, the initial data are a space metric — a two-covariant tensor — such that, in vacuum, the divergence of this combination of K and its trace is zero (or, if you have matter fields, you have a source term), and the Ricci scalar of this induced metric equals that right-hand side — well, in vacuum it is just the K-terms. And there's an important part which I forgot: plus 2Λ if there is a cosmological constant. The vector constraint equation doesn't see the cosmological constant, but the scalar one does. And there's a good reason why I forgot Λ: I now want to set Λ = 0, because we're going to talk about the ADM mass.
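So, with the cosmological term restored, the two constraint equations in the form stated in the lecture are:

```latex
R(g) \;=\; 16\pi\, T_{\mu\nu}\, n^\mu n^\nu \;+\; |K|_g^2 \;-\; (\operatorname{tr}_g K)^2 \;+\; 2\Lambda,
\tag{4}
```

```latex
D_j\!\left( K^{ij} \;-\; (\operatorname{tr}_g K)\, g^{ij} \right) \;=\; 8\pi\, T^{i}{}_{\mu}\, n^\mu .
\tag{5}
```

(The overall signs here follow the lecture's hedged conventions; flipping the orientation of the normal flips some of them.)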
So the ADM mass — the way I'm going to present it — is actually tied very naturally to this equation. So now we can forget the Cauchy problem. There will be evolution theorems next week by Dafermos, starting from initial data — so evolution, who knows what happens, but at this stage we are interested only in the initial data. And this argument was just to tell you that the equations G^0μ = 8π T^0μ cannot be evolution equations, because evolution equations have two time derivatives. These equations have a different character, and they restrict the freedom in what data you can choose. You can solve the evolution equations no matter what g and K you take, but you will not get a solution of the Einstein equations unless these constraints are satisfied. Right — so let's have a four-hour lecture tonight at eight and I will show you how to derive that! Yeah, there is a shortcut: if you have heard about the Gauss–Codazzi embedding equations, this is nothing else but a version of those. But I don't have time to talk about that. Say it again — junction equations? Okay, I'm not sure; maybe that's something, I don't know. Yes, please. [From the audience:] One way to think about these constraint equations is that what he's calling the scalar constraint equation is the full projection of the Einstein equations onto the normal — the double projection onto n and n, so G_μν n^μ n^ν — which you can see from the first term on the right-hand side right there. So if you project the full equations onto the normal twice, you get G_μν n^μ n^ν, and when you do the three-plus-one decomposition you show that it equals this expression, through the Gauss equation, the Codazzi equation, and the other embedding equations.
[Audience continues:] The other equation — what I would call the momentum constraint, which is what he's calling the vector constraint — comes from projecting the Einstein equations once onto the normal, with the other index projected tangent to the hypersurface; so it's a mixed, normal–tangential projection. And the evolution equations, which he has not written, are the projection of the Einstein equations completely onto the hypersurface. — That's good, right? And there's a +Λ as well. So if G_μν + Λ𝒢_μν is supposed to equal 8π T_μν, then certainly, when you contract both sides twice with the normal, they must be equal — it's an obvious necessary condition. And what I was trying to explain to you is that this necessary condition does not have a dynamical character, because it does not contain second time derivatives of the fields in terms of this data. One has to be careful with this statement, because you can always say: well, my initial data were something else, and then this becomes a second derivative — but from the point of view of what I said so far, this is how it is. And similarly here: the vector constraint is equivalent to the statement that (G_μj + Λ𝒢_μj) n^μ equals the matter term — the Einstein tensor (with the cosmological term) contracted once with the normal. That is exactly what you said. And maybe that's clear and maybe I should just have said this and nothing more. But in any case, the lesson of this — and we're going to start a new section — is that the Einstein equations are terrible: not only are they terrible from the evolution point of view, they also impose pretty terrible equations on the initial data. So you can't take just any initial data. You have the scalar equation, which written like that is maybe not that terrible, but is still rather nasty — an equation involving the scalar curvature; if you take a metric in dimension three and write it out component by component, it's going to take you a page. And you have the vector equation, which isn't that bad.
So the divergence of the tensor is zero in vacuum — that one is linear in K, but it's still coupled to the other one, because the metric enters it; so these equations are coupled. Good. What does all this have to do with the mass? Because after all, that's what I'm supposed to talk about. Well, that's my next section — next paragraph: a quick and dirty road to the ADM mass. Good. So what would be an elegant way to get the ADM mass? Deriving it from first principles — first principles meaning variational calculus, variational identities — and maybe, just maybe, I'll do that in my next lecture, depending on how I'm doing on time; it takes a lot of time. But there is a quick and dirty road to the ADM mass, by analogy with the Newtonian mass, and it comes from the following observation. How was it in the Newtonian case? The equation was Δφ = −4πμ — or ρ; ρ and μ are the same thing here. And to get this, I assumed that the fields are slowly varying and everything is very near Minkowski spacetime. Well, here, if I assume that the fields are slowly varying — remember that K is essentially time derivatives of the metric — then K is gone. And Λ, in the asymptotically flat case, is zero. So we are left with the equation: R, the scalar curvature of the space metric, is proportional to the matter density — which looks like the same equation, modulo factors of π and such: R = 16πμ. So in the Newtonian case, to get the mass we integrated over space and got a boundary term — so maybe we can do something like that here: integrate this equation over space and get a boundary term out of it. So the question — let's call this equation (6) — is: can we get a boundary integral out of (6)? And the answer is yes.
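For comparison, here is the Newtonian version of the boundary-term trick done numerically — not from the lecture, just a sketch: with the convention Δφ = −4πμ and φ = M/r outside the source, Gauss's theorem gives m = −(1/4π) ∮ ∇φ · dS, independently of the sphere you integrate over.

```python
import numpy as np

M = 2.5  # total mass of the source; potential phi = M / r outside it

def grad_phi(x):
    """Gradient of phi = M / r."""
    r = np.linalg.norm(x)
    return -M * x / r**3

def newtonian_mass(R=100.0, ntheta=60, nphi=120):
    """m = -(1/4 pi) * flux of grad(phi) through the sphere of radius R,
    via Gauss's theorem applied to Laplace(phi) = -4 pi mu."""
    dth, dph = np.pi / ntheta, 2 * np.pi / nphi
    flux = 0.0
    for th in (np.arange(ntheta) + 0.5) * dth:
        for ph in (np.arange(nphi) + 0.5) * dph:
            # outward unit normal on the sphere
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            flux += grad_phi(R * n) @ n * np.sin(th) * dth * dph * R**2
    return -flux / (4 * np.pi)

print(newtonian_mass())  # ~ 2.5, independent of R
```

The same structure — a bulk identity turned into a surface integral at infinity — is what the ADM construction below imitates.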
I'm not going into the calculation, because it would eat ten minutes of the lecture and it's not very interesting; let me just sketch how you do it. It is essentially the same sketch as before. So R = g^ij R_ij = g^ij g^kl R_kilj — that's one way of writing it. And now, what is Riemann? Well, Riemann is ∂Γ + Γ², schematically. And in the Newtonian case we integrated the second derivatives by parts over space to get a boundary term which had only first derivatives. The Γ² term will certainly not contain any second derivatives, so let's not look at its details; let's look at ∂Γ. Well, Γ is something like g⁻¹ ∂g: if you write it down, it is one half of three terms — two with a plus, and actually minus another one coming from the antisymmetrization. So when you take the derivative, you get second derivatives of g from six terms. Good. And then you do this calculation: six terms containing second derivatives of g, plus something which involves no second derivatives. Some of them cancel out — you have to do the algebra — and the end result, for reasons which I'm going to explain right away, I'm going to multiply by the square root of the determinant of g (the metric is Riemannian now, so I don't need absolute values). The formula is this: √g R = ∂_k U^k + Q(g, ∂g), where U^k = √g g^ij g^kl ( ∂_i g_lj − ∂_l g_ij ), and Q is quadratic in the derivatives of g. There's really nothing more to it than what I told you: you get the six second-derivative terms of g, you work it out and collect the second derivatives into the total derivative; everything which doesn't have second derivatives you put into Q, and you don't care what it is.
The only thing to know is that Q is quadratic in the derivatives of the metric — that will be useful later. And the reason I put in the determinant is that I want to integrate this: if I integrate on a manifold, I need a √g to get an invariant expression. So the integral of √g R is a boundary term plus the Q term: ∫ √g R d³x = ∫ ∂_k U^k d³x + ∫ Q d³x — this thing here is, by definition, my U^k — and you integrate with respect to the coordinates, in any dimension in fact. The divergence term, if you integrate by parts, is an integral over the sphere at infinity. So the mass is 1/16π times this boundary integral. The 1/16π comes from the equation: R is supposed to be 16π times ρ, the energy density, so you have to divide by 16π. And this boundary part is, by definition, the mass: m = (1/16π) ∮_{S_∞} U^i dS_i — call this equation (7). Let me repeat: this is, by definition, m = (1/16π) lim_{R→∞} ∮_{S_R} U^i dS_i, where dS_i is the radial normal times the area element. So this is the ADM mass. This is not the most familiar formula in the literature for this mass. Yes? — I forgot the Q term? That's why it's a quick and dirty road! But actually I'm going to come back to that piece in three minutes, when I ask: is this well-defined, is this finite? Then we'll need it, because I'm going to turn this equation around and say the mass is actually equal to the integral of √g R minus the integral of Q. So here it is — this is exactly the formula for the ADM mass that you'll find in the literature (let me move this, otherwise...), which you can simplify a little if you assume that g_ij is the metric δ_ij plus a correction term. And let me write that in the following form.
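For reference, the quick-and-dirty chain of formulas just sketched, collected (my transcription):

```latex
\sqrt{g}\, R \;=\; \partial_k U^k \;+\; Q(g, \partial g),
\qquad
U^k \;=\; \sqrt{g}\; g^{ij} g^{kl}\left( \partial_i g_{lj} - \partial_l g_{ij} \right),
```

```latex
m \;=\; \frac{1}{16\pi}\, \lim_{R\to\infty} \oint_{S_R} U^i\, dS_i ,
\tag{7}
```

with Q quadratic in the first derivatives of g, which is what will make the boundary term converge under the fall-off conditions discussed below.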
For the physicists in the room, let me remind you that f = O(r^−α) means that there exists a constant C such that |f| ≤ C r^−α; that's what I mean here. And if you write g_ij = δ_ij + correction, then — δ_ij being the Kronecker delta, whose derivatives are zero — the derivative of the metric is the derivative of the correction term. And asymptotic flatness is defined by requiring that the derivatives of the fields fall off one order faster than the fields themselves: so the correction is O(r^−α) and its derivatives are O(r^−α−1). Maybe an example: take t = 0 in Minkowski. Well, you have a choice. Either you choose stupid coordinates and calculate these derivatives — and it's not a trivial question; we're going to come back to the question of which coordinates are stupid and which aren't — or you just take the obvious coordinates in Minkowski spacetime, where the metric is just a bunch of constants, the correction is zero, and the mass is zero. So in those coordinates the mass is zero and you're done. Now, I'm not going to do the calculation in Schwarzschild, but if you take t = 0 in Schwarzschild, you get the mass parameter out of this: the m of equation (7) is the M parameter of the Schwarzschild metric. You can check this. There's one calculation we can do just for fun — yes, here. So you choose coordinates so that the metric is the Euclidean metric — the flat metric, δ, ones on the diagonal — plus a term which decays at large r. For large r, that's what I want, because I want to take this limit at infinity. So this error term should decay like some power of r — maybe 1, maybe 25. Actually, Schwarzschild in higher dimensions would be n − 2, okay?
So α = n − 2 for Schwarzschild in n space dimensions; in space dimension three, n − 2 = 1, so you get the 1/r fall-off of the Schwarzschild metric right from this. So the Schwarzschild metric can be written as the Euclidean metric plus terms which decay like 1/r, okay? That's what I meant. And the derivatives should decay one power faster. This is the definition of asymptotic flatness — and definitions you don't question; well, you're welcome to question, but it is the definition: we say the metric is asymptotically flat if this is satisfied. Well, this was maybe meant as an example for later, but question one: you have this integral in (7) — does the limit exist? I'm integrating over a sphere of radius r, so the area element has an r² in it. So for this to converge against that r², the integrand should go to zero — certainly, if it doesn't, you're in trouble — like 1/r², say. And indeed: g goes to a constant, so its derivatives should go to zero, right? That's how it works, and you can make a consistent definition with this kind of asymptotics. Now, Q is quadratic in ∂g, and the derivatives of the metric fall off like r^−α−1; so Q, being quadratic, falls off twice as fast, like r^−2α−2. And this is going to be important in the next statement about convergence of these integrals. But before I get to that: this is not quite the standard form of the ADM mass — it looks a little heavy. If you think about what is important in this formula: the determinant of g goes to one, the g^ij's go to deltas, and the correction goes to zero. So the reasonable thing is just to replace those factors by their leading-order behavior and keep the derivative terms.
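Spelled out for the Schwarzschild example mentioned above (a sketch, writing the t = 0 slice in Cartesian-type coordinates x^i = r n^i with n_i = x_i/r):

```latex
g_{\rm Schw}\big|_{t=0}
= \frac{dr^2}{1 - 2M/r} + r^2\, d\Omega^2
= \Bigl( \delta_{ij} + \frac{2M}{r}\, n_i n_j + O(r^{-2}) \Bigr)\, dx^i dx^j ,
```

so the correction to the flat metric is O(r^{-1}) and its derivatives are O(r^{-2}): the fall-off rate is α = 1 = n − 2 in space dimension n = 3.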
And that's what people do when writing the standard formula for the ADM mass: you replace g^ij by δ^ij, g^kl by δ^kl, and √g by one. The ADM mass is therefore m = (1/16π) ∮_{S_∞} ( ∂_i g_ij − ∂_j g_ii ) dS_j. Completely non-covariant, completely outrageous if you think about geometry: we started with a nice object like the scalar curvature, and we get this dirty integral of partial derivatives, which a priori doesn't make sense, doesn't need to converge, and doesn't have to have any invariance properties. And I told you this is a quick and dirty road to it; you can do the whole Lagrangian formalism — variational formulas, Noether currents, whatever — and you'll get there. You'll get there by a very long road, which for me doesn't tell you much more, but it has some content from the physics point of view, so it's worth doing. But you will end up in the same place after a lot of work. — I got it completely wrong? You know, I do this all the time in my lectures, and I've asked my students from Vienna to watch out and point out to me where I do things like that, and they're obviously sleeping, because they were supposed to tell me what you just told me. Where are they? They probably just left. — And it's summed, right? It's not up, it's not down: the summation convention in an outrageous form, where you just sum, period, don't ask. Right, exactly. So this is the formula. So let's just check it on our post-Newtonian metric. If you remember, the space part of the metric was just a conformal factor: g_ij = (1 + 2φ) δ_ij. So I need to calculate g_ii and ∂_i g_ij. Let's see — suppose we are in dimension 3; Newtonian physics is usually in dimension 3. If I just take the trace, I get a 3 from the delta: g_ii = 3(1 + 2φ). And ∂_i g_ij — of course this produces the derivative of the conformal factor.
Now, ∂_i g_ij: the δ_ij and the 1 give no contribution, so it's ∂_i (2φ δ_ij), which is 2 ∂_j φ — the δ_ij just converts the i into a j. And ∂_j g_ii gives 6 ∂_j φ from the trace. So we get m = (1/16π) ∮ (2 ∂_j φ − 6 ∂_j φ) dS_j: 2 minus 6 is minus 4 — wow, it seems to work — and 1/16π times minus 4 produces m = −(1/4π) ∮ ∂_j φ dS_j. Yes — and this is exactly the formula we got in the Newtonian approximation; it's included here. dS_j means the normal times the surface element — or, if we are in higher dimensions, the (n−1)-dimensional area element; and then this n gets into a conflict of notation with the normal n, so that's the best way to write it. Good. So it was quick and dirty, but we recover the same thing we got in the post-Newtonian approximation, and it has a chance to be valid more generally. And there are two questions we can ask here. I need a number for this equation — yeah, but 8 shouldn't come before 7, so this would be 6a or something like that, or actually (*). Good, this is equation (*). And let me forget the scalar constraint; I can always write it again if I need it. So the question: when is the mass finite? Well, I'll write you the answer as a fact. If α > 1/2 — a weird number; this is in dimension 3, and I'll write you the result in any dimension later — and the integral ∫_Σ |R| dμ over your surface Σ is finite — rather than writing that integral, I'm going to say that R is in L¹, which means a little more than this, but for our purposes that's it — so if α > 1/2 and R ∈ L¹, then the mass is finite. The limit exists, right? Well, which limit? The integral over the sphere at infinity is defined by a limiting process, and that limit exists and is finite. And the proof is very easy.
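As a numerical sanity check on the claim that t = 0 in Schwarzschild gives the mass parameter — not from the lecture; a sketch assuming the exact Schwarzschild slice g_ij = δ_ij + (2M/(r − 2M)) n_i n_j and applying the standard ADM surface integral (*) at a large coordinate sphere:

```python
import numpy as np

M = 1.0  # Schwarzschild mass parameter

def g_schw(x):
    """Exact t = 0 Schwarzschild 3-metric in Cartesian coordinates x = r n:
    dr^2/(1 - 2M/r) + r^2 dOmega^2  =  delta_ij + (2M/(r - 2M)) n_i n_j,
    valid for r > 2M."""
    r = np.linalg.norm(x)
    n = x / r
    return np.eye(3) + (2 * M / (r - 2 * M)) * np.outer(n, n)

def adm_mass(gfun, R=1.0e4, ntheta=60, nphi=120):
    """(1/16 pi) * integral over the sphere r = R of (d_i g_ij - d_j g_ii) dS_j,
    via central finite differences and a midpoint angular quadrature."""
    h = 1e-4 * R                       # finite-difference step
    dth, dph = np.pi / ntheta, 2 * np.pi / nphi
    total = 0.0
    for th in (np.arange(ntheta) + 0.5) * dth:
        for ph in (np.arange(nphi) + 0.5) * dph:
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            x = R * n
            d = []                     # d[k][i, j] approximates d_k g_ij at x
            for k in range(3):
                e = np.zeros(3); e[k] = h
                d.append((gfun(x + e) - gfun(x - e)) / (2 * h))
            integrand = sum((sum(d[i][i, j] for i in range(3)) - np.trace(d[j])) * n[j]
                            for j in range(3))
            total += integrand * np.sin(th) * dth * dph * R**2
    return total / (16 * np.pi)

print(adm_mass(g_schw))  # ~ 1.0
```

At finite radius the result differs from M by terms of order M/R, so pushing R outward makes the answer converge to the mass parameter.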
I turn the equation (*) around — I already mentioned this. So, turning (*) around, the mass is, up to the 1/16π: m = (1/16π) [ ∫ R dμ_g − ∫ Q ]. Here dμ_g is the geometric measure — I hadn't told you what dμ is — namely √g d³x. So now, R is in L¹, which means the first integral is finite; it certainly exists. And what about the second term? This is the term which gives us the one-half. Well, Q is quadratic in ∂g, and ∂g behaves like r^−α−1. So I have, roughly, an integral of (∂g)² over ℝ³ — or at least over large distances, because that's the only place it matters — say over r larger than some R: ∫ (∂g)² d³x. From (∂g)² I get r^−2α−2, and from the three-dimensional volume element I get r² dr times the angles — dθ and so forth; but integrating over the angles just gives a constant, so the important integral is the radial one: ∫_R^∞ r^−2α dr, and this converges if and only if α > 1/2, right? If α = 1/2, I get ∫ r^−1 dr, which is a log — obviously I get infinity — and if the fall-off is even slower than that, it blows up even worse. But if α > 1/2 — well, you can calculate this integral yourself — it converges. So good. Now, maybe you'd think you need something like Schwarzschild behavior for convergence of this integral. Schwarzschild goes like what? The metric approaches the flat one like 1/r, the derivatives go like 1/r², and so in the boundary integral you get 1/r² against the r² from the area.
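The convergence count just made, in one line:

```latex
\int_{r > R} |\partial g|^2\, d^3x
\;\sim\; C \int_R^\infty r^{-2\alpha-2}\, r^2\, dr
\;=\; C \int_R^\infty r^{-2\alpha}\, dr
\;<\; \infty
\quad\Longleftrightarrow\quad \alpha > \tfrac12 ,
```

with the borderline α = 1/2 producing the logarithmic divergence mentioned above.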
So this sounds like the right behavior. But you need less, right? Somehow it's a miracle that you need less: you only need α larger than one-half. Good. Now, the next key fact is that this thing is actually well-defined. But before I show you that it is well-defined, let me show you how it can fail to be well-defined, by a nice example which is due to Russians. I'm not going to write the names, because I've told you I'm not going to do details; you can find this in the notes. The example goes as follows. You take flat space: g = dr² + r²(dθ² + sin²θ dφ²). If I ever have to write something like that again, I'll call it dΩ²; so if you ever see dΩ², it's this thing. And you introduce a new radial coordinate: you replace r by ρ + c ρ^(1−α). Let me make sure I have the right numbers — not that it matters, but we might as well have the right formula. In other words, if you factor out ρ, it's r = ρ(1 + c ρ^(−α)). So you have a new radius; to leading order r behaves like ρ at large distances, and the correction is lower order, so α is positive. I'm just making a change of coordinates. And this is the same α that will appear in the metric fall-off: spare me the calculation, but in this new coordinate, this flat metric satisfies the decay condition with this α. And now you calculate the integral and you get: M equals infinity when α is smaller than one-half — so the limit still exists, but it's infinite; if you don't like infinite limits, you can say the mass doesn't exist. You get c²/8, a funny number, if α equals one-half. And you get 0 if α is larger than one-half. So I insist: this is flat space, the flat Euclidean metric, and you just choose some stupid coordinates.
These are the stupid coordinates: you replace r by this. If the exponent α here is larger than one-half — α larger than one-half is the case we love, because then the metric goes to a flat one at a rate better than one-half, and we know that at this rate the mass exists and is finite — you get 0. So good, fine, I'm not excited: we had 0 to start with anyway. But if you take α equal to one-half, you get a metric which is still asymptotically flat — it tends to the Euclidean metric at large distances at the rate one-half — and the mass can be any number, because c is arbitrary. So you can generate any mass by a change of coordinates. Maybe this explains a little better your question about why I assume some fall-off rate, and in fact not just any fall-off rate, but faster than one-half: because otherwise the integral written up there doesn't make geometric sense. If the fall-off rate is one-half or slower, you can get infinite mass for a Euclidean metric, which doesn't make sense, or you can get any number. Mind you, it's a positive number, so if you think about the positive mass theorem, well, maybe that's actually okay. In any case, if you allow a rate of one-half or slower, this definition doesn't make any sense, geometrically — and then also physically, if there's supposed to be any physics in it. So the good news is that this one-half rate is actually very good. There is a fact which says: α larger than — well, let me write you the number in any dimension now — (n−2)/2. So in dimension n, if you repeat this calculation with n−2 here, you get finiteness in the same way. So if α is larger than (n−2)/2 and R is in L¹ — which means that the integral of R over Σ is finite — then the mass is independent of the coordinate system used to calculate it. But within the class, right? We've seen the class of coordinate systems.
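As a sanity check on the borderline case, here is a numerical sketch (my own, with c and α as the parameters of the example above) that writes the flat metric in the bad radial coordinate ρ, with r = ρ + c ρ^(1−α), in Cartesian form, and evaluates the mass integral at finite radii; at α = 1/2, c = 1 it stays near c²/8 = 0.125 as the radius grows:

```python
import numpy as np

ALPHA, C = 0.5, 1.0   # borderline decay rate and the free constant c

def metric(x):
    """Flat Euclidean metric in the 'bad' radial coordinate rho:
    r = rho + C*rho**(1-ALPHA), so g = (dr/drho)^2 drho^2 + r^2 dOmega^2,
    i.e. g_ij = h^2 delta_ij + (f^2 - h^2) n_i n_j in Cartesian form."""
    rho = np.linalg.norm(x)
    n = x / rho
    f = 1.0 + C * (1.0 - ALPHA) * rho**(-ALPHA)   # dr/drho (radial factor)
    h = 1.0 + C * rho**(-ALPHA)                   # r/rho (tangential factor)
    return h**2 * np.eye(3) + (f**2 - h**2) * np.outer(n, n)

def mass_at_radius(R, step=1e-3):
    """ADM surface integral at |x| = R; by spherical symmetry it suffices
    to evaluate the integrand at one point and multiply by the area."""
    x = np.array([0.0, 0.0, R])
    n = x / R
    dg = np.zeros((3, 3, 3))    # dg[k, i, j] = d_k g_ij, central differences
    for k in range(3):
        e = np.zeros(3)
        e[k] = step
        dg[k] = (metric(x + e) - metric(x - e)) / (2.0 * step)
    integrand = sum((sum(dg[j, i, j] for j in range(3)) - dg[i].trace()) * n[i]
                    for i in range(3))
    return integrand * 4.0 * np.pi * R**2 / (16.0 * np.pi)

for R in (10.0, 100.0, 1000.0):
    print(R, mass_at_radius(R))
```

Setting ALPHA above 1/2 makes the printed values decay to 0, and setting it below 1/2 makes them grow without bound, matching the three cases of the example.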
Satisfying — so now we are at equation 9, probably — satisfying 9. So it sounds stupid, but as long as you stick to reasonable coordinate systems, you get something which is an invariant of your geometry. Good. So do I want to tell you anything about the proof here? Maybe two words. Let me see how I'm doing with time. How much time do I have left? About ten minutes. I don't want to start a new topic, so let me tell you a few elements of the proof, ideas of the proof. Say we have coordinates x, and the metric — let me write g¹_ij for its components in this coordinate system — is δ_ij + O(|x|^(−α)); and I'm not going to write the derivative conditions. Now we have another system of coordinates y^i, and let's write g²_ij for the metric components in these coordinates: again δ_ij plus O, but now it's O(|y|^(−α)). So we have, in principle, a decay rate in the coordinates x and a decay rate in the coordinates y, but it's not clear how the two are related to each other. Good. But we know that these are the same metric in two coordinate systems, differing by a coordinate transformation, right? If I wanted to be completely complete, I would need to justify that x going to infinity corresponds to y going to infinity. I'm not going to do this, but it's kind of obvious, right? Of course, it could happen that x going to infinity corresponds to y going to, say, the point 25, maybe, right? So you'd have to exclude such cases. But this I'm going to assume. Good. So we have the same metric in two coordinate systems. This means that if I change coordinates, I have g¹_kl = g²_ij (∂y^i/∂x^k)(∂y^j/∂x^l). This is the transformation formula for the metric components, right? Good. So I look at these equations.
So these equations are telling me — the way I'm going to interpret them — how these derivatives decay: the derivatives cannot be too big. And the way to see this is just to multiply this equation — let's call it equation 10 or something like that — by g₁^(kl), the inverse metric. Then I get something which looks like this: g₁^(kl) g²_ij (∂y^i/∂x^k)(∂y^j/∂x^l) = g₁^(kl) g¹_kl. It looks like a funny thing to do, because you're contracting components of the same metric in two different coordinate systems. But one point here is that the right-hand side is bounded: g¹ goes to δ plus something small, O(|x|^(−α)), and g² is δ plus something small, O(|y|^(−α)). So whatever these corrections are, this is a bounded function — well, for large |x|, but that's the only regime we're interested in, right? The g's are the Euclidean metric plus something small, it's a finite sum, so you get some bounded function there. Now, the left-hand side is essentially the same as the norm of the Jacobian, |∂y/∂x|², squared. What I mean is: call this expression diamond; then diamond is smaller than a constant times |∂y/∂x|², and larger than one over another constant — a small constant — times |∂y/∂x|². And this is just freshman algebra: you have a positive definite quadratic form on the space of matrices, the space of matrices is finite-dimensional, and all norms on a finite-dimensional space are equivalent. So this norm is equivalent to the Euclidean norm, which is just |∂y/∂x|², the sum over i, j of (∂y^i/∂x^j)². So just from the equality saying that one metric is the transform of the other, you get that the Jacobian matrix — the matrix of derivatives of the coordinate transformation — is bounded.
Good, but now that's essentially the end of the story, because once you have that the derivatives are bounded, you can estimate the norm of y. Let me do it carefully. If I calculate the norm of the vector y, I can write it as |y(x)| equals |y| at |x| = R₀, plus the integral in r, from R₀ to |x|, of d/dr of |y|. Well, you just have to calculate this derivative: d/dr |y| = (∂y^i/∂x^j)(x^j/r)(y^i/|y|), something like that — that's how you differentiate the norm; it's just algebra. This factor is bounded, and this one is bounded. So you get that |y(x)| is smaller than C times (r − R₀), r being the norm of x, plus the constant from radius R₀. In other words, the y-coordinate cannot grow faster than a multiple of the x-radius. You use the same argument in the other direction, for x as a function of y, and you get the reverse inequality: |x| smaller than C₁ times |y| plus another constant C₂. So now you know that the two radii behave in a comparable way. And the final thing you have to use is the transformation law of the Christoffels, which you're going to write upside down. I'm running out of time, so I'm just going to write the formula formally: Γ¹, in the x-coordinates, is something like (∂x/∂y)[Γ² (∂y/∂x)(∂y/∂x) + ∂²y/∂x²], and I'll let you put the indices where you need them. So the Christoffels don't transform like a tensor — there is this inhomogeneous part — and you use this to get information about the second derivatives.
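Since the lecturer leaves the index placement to us, here is the standard transformation law written out in full (a textbook formula; Γ¹ denotes the Christoffels of the x-coordinates and Γ² those of the y-coordinates, matching the schematic version above):

```latex
\Gamma^{1\,k}{}_{ij}
  \;=\;
  \frac{\partial x^k}{\partial y^c}\,
  \frac{\partial y^a}{\partial x^i}\,
  \frac{\partial y^b}{\partial x^j}\,
  \Gamma^{2\,c}{}_{ab}
  \;+\;
  \frac{\partial x^k}{\partial y^c}\,
  \frac{\partial^2 y^c}{\partial x^i\,\partial x^j}.
```

The second, inhomogeneous term is exactly what lets bounds on Γ¹, Γ², and the first derivatives ∂y/∂x be turned into bounds on the second derivatives ∂²y/∂x².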
Now, Γ¹ behaves like O(|x|^(−1−α)) — that's my hypothesis on the asymptotics, since the Christoffels are built from first derivatives of the metric. Γ² is O(|y|^(−1−α)), but by the estimates comparing the radii, this is the same as O(|x|^(−1−α)). So from this equation, since the first derivatives ∂y/∂x are all bounded, you get that ∂²y/∂x² is O(|x|^(−1−α)) as well. So you get good control of the second derivatives of the coordinate transformation. Then you put all of this into the formulas and you get the result. So the proof just amounts to controlling the way asymptotic flatness imposes strong conditions on how the coordinate functions relate to each other, by a completely elementary integration and algebra argument. Once you have good control of what the coordinate changes look like at large distances, you can put it into the mass formula and show that the mass is invariant. And I think it's probably a good moment to stop, is it? That's probably right. Okay, thank you.
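Schematically, the chain of estimates in the sketch above (my summary, not verbatim from the lecture) is:

```latex
\left|\frac{\partial y}{\partial x}\right| = O(1)
\;\Longrightarrow\;
C^{-1}|x| - C \;\le\; |y| \;\le\; C|x| + C
\;\Longrightarrow\;
\frac{\partial^2 y}{\partial x^2} = O\!\left(|x|^{-1-\alpha}\right).
```

Feeding these bounds back into the transformation of the star integral is what shows that the boundary terms computed in the two coordinate systems agree in the limit, precisely in the regime α > (n−2)/2.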