Okay, so we're continuing where we left off in the morning. In the morning we'd motivated starting with this metric: eta mu nu plus O mu O nu over rho to the power d minus three. What's rho? It's an arbitrary function of x and t; we'll find some restrictions on it as we proceed, but at the moment it's arbitrary. And O is n minus u, where n is the normal to the surfaces of constant rho — it's d rho, normalized — and u is an arbitrary function such that u dot n is equal to zero. So far that's the only restriction, okay? And in the morning we'd understood that rho equals one was the event horizon of this spacetime. We were not interested in solving the equations for rho less than one. For rho greater than one, once rho minus one was larger than one over d, the spacetime rapidly reduced to flat space, exponentially in d, so the spacetime solved the Einstein equations there. What remained was this region of thickness one over d around the surface rho equals one. That's the region we're now going to consider. Okay, so what we're going to do is the following. Consider this region, and let's say we've got some point at rho equals one. Maybe I'll draw this — I'll choose this point at rho equals one. And then I look at a little box of spacetime around that point; that box has size one over d. In this box the metric is varying fast in the normal direction because of the rho to the power d minus three. And we've seen that the length scale of variation of the function one over rho to the power d minus three is one over d, even though rho itself varies on length scale one, okay? For instance, we had the approximation that one over rho to the power d minus three is approximately e to the power minus d times rho minus one. So you can see that as we move a distance of order one over d, rho changes by order one over d.
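The approximation just quoted is easy to check numerically. Here is a small sketch (my own illustration, not part of the lecture) comparing the exact profile 1/rho^(d-3) against e^{-d(rho-1)} in the membrane region rho - 1 ~ 1/d, and showing that the profile is exponentially small in d once rho - 1 is of order one:

```python
import numpy as np

# Hedged numerical sketch: the metric factor 1/rho^(d-3) is well approximated
# by exp(-d*(rho - 1)) when rho - 1 is of order 1/d, and is exponentially
# small in d once rho - 1 is of order one.

def profile(rho, d):
    """The exact factor 1/rho^(d-3) appearing in the metric ansatz."""
    return rho ** (-(d - 3))

def approx(rho, d):
    """Leading large-d approximation exp(-d*(rho - 1))."""
    return np.exp(-d * (rho - 1.0))

d = 1000

# Inside the membrane region: rho - 1 of order 1/d.
rho_near = 1.0 + 1.0 / d
rel_err = abs(profile(rho_near, d) - approx(rho_near, d)) / approx(rho_near, d)

# Far region: rho - 1 of order one; the profile is exponentially small in d.
rho_far = 2.0
far_value = profile(rho_far, d)
```

The relative error of the exponential approximation in the near region is of order 1/d, while at rho = 2 the profile has already collapsed to something utterly negligible.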
So this quantity, one over rho to the power d minus three, changes by order one. Is this clear? So just as for the Schwarzschild metric, this metric generically varies on length scale one over d in the direction normal to the rho equals one surface, okay? So some components of the metric are varying fast, namely on length scale one over d. To make that clear — this is another way of saying we focus on this box — we choose some point x naught mu which sits at rho equals one, and then we use a new coordinate alpha mu which is d times x mu minus x naught mu. So when x mu minus x naught mu is of order one over d, alpha mu is of order one, okay? So moving to this new coordinate blows up a region of size one over d into our whole universe: even as alpha becomes very large, we're still exploring distances of order one over d in the original coordinates. Okay, now in the original coordinates the metric functions were of order one — the value of the metric at any given point was order one or less, at least in this one-over-d region. However, if we change to the coordinate alpha, take something of the form g mu nu dx mu dx nu and substitute for x mu in terms of alpha, that substitution gives dx mu equals d alpha mu over d. This simply reflects the fact that if you rewrite the metric in terms of alpha, coordinate distances of order one are physical distances of order one over d. That's inconvenient for our analysis. So in order to continue the analysis, we use the fact that Einstein's equations in a vacuum have a scale symmetry: if the metric g solves the equations, then a constant rescaling times g also solves the equations. This follows from very simple considerations.
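That scale symmetry can be verified symbolically in a toy example. The following sketch (my illustration; the round two-sphere stands in for any metric) computes the Ricci tensor directly from the Christoffel symbols and checks that it is unchanged under g going to c squared times g for constant c:

```python
import sympy as sp

# Hedged symbolic sketch: for an example metric (the round 2-sphere), verify
# that the Ricci tensor is unchanged under a constant rescaling g -> c^2 g,
# so R_mn = 0 is preserved when we define the new metric G = d^2 g.

th, ph, c = sp.symbols('theta phi c', positive=True)
coords = [th, ph]

def ricci(g):
    """Ricci tensor of a metric matrix g in the coordinates `coords`."""
    ginv = g.inv()
    n = len(coords)
    # Christoffel symbols Gamma^a_{bc} = (1/2) g^{as}(d_c g_{sb} + d_b g_{sc} - d_s g_{bc})
    Gamma = [[[sum(ginv[a, s] * (sp.diff(g[s, b], coords[cc])
                                 + sp.diff(g[s, cc], coords[b])
                                 - sp.diff(g[b, cc], coords[s])) / 2
                   for s in range(n))
               for cc in range(n)] for b in range(n)] for a in range(n)]
    R = sp.zeros(n, n)
    for b in range(n):
        for cc in range(n):
            expr = 0
            for a in range(n):
                expr += (sp.diff(Gamma[a][b][cc], coords[a])
                         - sp.diff(Gamma[a][b][a], coords[cc]))
                for s in range(n):
                    expr += (Gamma[a][a][s] * Gamma[s][b][cc]
                             - Gamma[a][cc][s] * Gamma[s][b][a])
            R[b, cc] = sp.simplify(expr)
    return R

g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])       # round unit 2-sphere
diff_R = sp.simplify(ricci(c ** 2 * g) - ricci(g))  # should vanish identically
```

Constant rescalings drop out of the Christoffel symbols entirely, which is why the difference of the two Ricci tensors vanishes term by term.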
For instance, as many of you know, R mu nu is built from g and g inverse in such a way that it is actually invariant under a constant scaling. And even if it had scaled by some constant amount, R mu nu equals zero would still have been left invariant. It's an obvious fact: it just follows from R mu nu scaling homogeneously under an overall scaling of g. In our case the scaling is a particularly simple one, but that doesn't matter. So I'll define a new metric — let me be a little more consistent and use M, N for spacetime indices — G MN equals d squared times g MN. Here g MN is simply the same metric as before, but expressed in terms of the coordinates alpha, and then I explicitly undo the fact that the distances are so small by rescaling the metric up. So now let's look at this G MN. In this G MN we've got one direction of fast variation — the direction of the normal — in which things are varying on length scale one: because this rho function was varying on length scale one over d, and we stretched out the coordinates so that one over d becomes one. In every other direction the variation is slow, because, for instance, u mu is a function of x, and n is a function of x, and these functions of x are not raised to the power d. So something that was varying on length scale one in the x coordinates varies on length scale d in the alpha coordinates. So we've got one particular direction, namely the direction of the gradient of rho, in which things vary on length scale one; in all other directions, the variation is on length scale d. So you might suspect the following.
You might suspect that in order to get this metric to be a solution of Einstein's equations at leading order, all we have to do is adjust the variation of rho in the normal direction correctly, okay? This suspicion is a bit wrong, as I will explain in the next five minutes, but it's a reasonable first suspicion to have. Now, how do I get the variation right? Firstly, as you see from the approximation above, all that matters in this little blown-up patch is the magnitude of the gradient of rho. Because you can expand rho minus one in a Taylor series about our point: you get a gradient term, then a term with two derivatives, and so on. In a patch of size one over d, the first term is the gradient times a displacement of order one over d, which cancels against the explicit factor of d in the exponent; the second term is suppressed by a further one over d. So all that matters is the norm of the gradient of rho, okay? So in order to get Einstein's equations satisfied here, we clearly can't leave rho completely arbitrary: we must have some condition that determines the norm of its gradient on the surface rho equals one. I'm not yet going to tell you what that condition is and how we get it. Before I do, there is a technical aside — it's not just technical, actually, it's physical as well — that's important enough to go through for the next five or ten minutes. It goes as follows. What we're doing here is potentially a bit confusing, because d plays two roles in this problem. The first is that it enters the actual metric, through this power d minus three. The second is that it's the dimensionality of spacetime. So there are two possible sources of d when you act with differential operators on a function, okay?
One of them is the explicit d in your formula; the second is that a differential operator acting on a function that varies in d different directions can pick up unexpected powers of d, okay? So to do the large-d counting properly, we have to be very clear about what kind of solutions we're looking at. I will restrict my attention to the following class of solutions: consider solutions that in d dimensions maintain an SO(d minus p minus two) isometry, where p is some number — any number you want, one, two, five, ten, two million — but one that is held fixed as d is taken to infinity. What is the basic idea behind this? You see, as we go up in the number of dimensions, the space of possible solutions also increases, for two reasons. Firstly, just a scalar function has many more directions in which to wiggle. Secondly, we're dealing with a metric, and the metric has many more index components. So it's usually the case that a large-d limit — indeed a large-anything limit — doesn't unambiguously exist unless you focus on observables that are themselves defined independently of that parameter. For instance, in large-N Yang-Mills theory, the large-N limit is cleanest when viewed in terms of trace variables: a trace operator is a universal kind of observable that exists in a way independent of N, and that allows for the formulation of a clean large-N limit. So in this problem, whenever I'm confused about the order in d of a certain quantity — for instance, if I take del squared of this function rho and I want to know whether that scales like one, or like d, or like d squared — I will check it by evaluating it on configurations that obey this condition. This condition will play very little role in what we're doing, okay?
It just allows us to cleanly count the d-dependence of various quantities. In the end, all we need to assume are those d-dependences, but to motivate them, we make this assumption. What I'm saying will become clearer in the next two or three minutes. So suppose we want to maintain this SO(d minus p minus two) symmetry. How would we proceed? Well, we've got our coordinates x M, and let's break them up into two batches: the first bunch, which I'll call the x a's, and the rest, which I'll call the z's. There are p plus two of the former and d minus p minus two of the latter. You might ask why I put this two here; that's for historical reasons that you don't need to get into. So I break up my coordinates of flat space into two bunches, the x a's and the z's, and the SO(d minus p minus two) isometry that I will maintain is the isometry of rotations in the z directions. You could ask: how do I choose these z's? Do I boost before choosing them? The answer to all such questions is that it doesn't matter — all I'm going to use this for is to estimate how things scale with d, and none of those choices affect the rest. Choose it however you want. Okay, so we've got these p plus two dimensions and d minus p minus two dimensions. And this assumption of isometry, with this choice of what the isometry means, is the same thing as the following. Let me define s squared equals z a z a — by the way, time lies in the x a directions — so s is just the radial coordinate in the space in which I'm maintaining rotational invariance. The assumption of isometry invariance then tells you that rho is not a function of all the coordinates independently, but only a function of the x a's and of s. Similarly for u M — and that's a more interesting constraint for u M: firstly, each component is only a function of the x a's and s, but the constraint also acts on the components themselves.
Okay, so u M consists of the u a's, which are functions of x a and s, and possibly a radial component in the z directions. The radial component can be written as z A over s times something we'll call u s, which again is a function only of x a and s. So if I make this assumption of isometry, it follows that rho and u are of this form. Is this clear? Any questions or comments about this? Okay, so now, making this assumption, I'm going to compute a few quantities that will play an important role in what we do as we proceed. The first quantity is the extrinsic curvature. The extrinsic curvature of what? If you've not seen this before: consider a codimension-one manifold in a larger spacetime. The extrinsic curvature is the symmetrized derivative of the normal — the derivative of the normal, which turns out to be automatically symmetric — projected orthogonal to the normal: del A n B, projected orthogonal to the normal, okay? So in order to compute this extrinsic curvature, what I first need is the formula for the normal, but that's very easy, because the normal is simply the gradient of rho. So let's write down what n A is, as a one-form: n A dx A equals del a rho dx a plus del s rho ds. But s is the square root of z A z A, so ds is simply z A dz A over s, just using the chain rule. So this is what the normal one-form is. Now, what I want to do is take del A n B — not the antisymmetric part, just the derivative. I really should do this with covariant derivatives, but since I'm in flat space and using Cartesian coordinates, the covariant derivative is the same as the ordinary derivative. So what do I get? Two kinds of terms: this derivative can act on the coefficient functions del a rho and del s rho, or it can act on the one-form ds itself.
Oh, by the way — it's not going to be important for the point I'm about to make, but to be complete — there is also a normalization: one should divide by the square root of del a rho squared plus del s rho squared. It will be important for detailed calculations, okay? So, let me quickly make the point I want to make. Suppose you were doing a problem in which there were p plus three Cartesian coordinates — suppose we've got an auxiliary space, some sort of base space of a fibration, whose p plus three Cartesian coordinates are the x a's and s — and you had a normal vector with components del a rho and del s rho, divided by the square root of del a rho squared plus del s rho squared. In such a space, you would compute an extrinsic curvature just by differentiating these components. What's different in our space compared to that one? What's different is that this one-form ds is not quite constant, so its derivatives are not quite zero. If you compute the extrinsic curvature from this normal, you'll get exactly what you would have got with that normal vector, pretending it was living in the auxiliary p-plus-three-dimensional Cartesian space whose coordinates are the x a's and s. This auxiliary space I will denote with coordinates x mu, so x mu stands for the x a's together with s. So what you'll get is a K mu nu. In addition, you will get some extra components: the components you get by differentiating this one-form ds. Is this clear? What do we get by differentiating it? Well, the coefficient in front — del s rho divided by the normalization — is n s, the s-component of the normal. And then you differentiate the one-form z A dz A over s: you can either differentiate the top or the bottom. If you differentiate the top, you get delta A B over s. If you differentiate the bottom, you get minus z A z B over s cubed, okay?
So we can take out an overall factor of n s over s. So what have we concluded? We've concluded the following: the extrinsic curvature of this membrane has two parts. There's a first part which just lies in the non-symmetric p plus three directions — that's this K mu nu. But there's a second part that has components along the d minus p minus three dimensional sphere under which you have the isometry. That's what this is telling you: written in angular coordinates, it would just be proportional to the metric on that sphere. Okay? So apart from the extrinsic curvature in the directions of variation, there is this part that is almost trivial, but it's there. Now this is very important, for the following reason. Suppose you compute the trace of the extrinsic curvature, K equals K A A. You will find the trace of K mu nu, which is of order one, because it's made of derivatives of functions that vary on scales of order one. But in addition you will find the contribution from the sphere part: the delta A B over s term contributes a factor of d minus p minus two, and the minus z A z B over s cubed term contributes minus one, so together you get plus d minus p minus three times n s over s. What is the important point of this whole exercise? The important point is that the trace of the extrinsic curvature is of order d. The extrinsic curvature components are all of order one, but the trace is of order d. This is a theme that we will see again and again and again in this problem: when indices are contracted, our assumption of isometry guarantees that, at least generically, contracted indices contribute like d. Okay? There's another exercise one can do: take any function B of x mu — that is, a function of the x a's and s — and check what del squared B is. Once again, you'll find that this is order d.
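The sphere contribution to the trace can be seen very concretely. Here is a small sketch (my own illustration): for a round sphere of radius s in n flat dimensions — a stand-in for the rotationally invariant z-directions — the components of K are all of order one, while the trace grows linearly with n:

```python
import numpy as np

# Hedged numerical sketch: for the sphere |z| = s in n flat dimensions, the
# (unprojected) derivative of the unit normal is
#   K_AB = delta_AB / s - z_A z_B / s^3,
# whose components are order one but whose trace is (n - 1)/s ~ n.

def extrinsic_curvature(z):
    """partial_A n_B for the sphere |z| = const, with n_B = z_B / |z|."""
    s = np.linalg.norm(z)
    n = len(z)
    return np.eye(n) / s - np.outer(z, z) / s ** 3

dims = [50, 100, 200]
traces = []
for n in dims:
    rng = np.random.default_rng(0)
    z = rng.normal(size=n)
    z *= 2.0 / np.linalg.norm(z)   # put the point on a sphere of radius s = 2
    K = extrinsic_curvature(z)
    traces.append(np.trace(K))     # expect (n - 1)/s = (n - 1)/2
```

Every entry of K is bounded by 1/s, yet the trace is (n - 1)/s: exactly the "contracted indices give a factor of the dimension" rule at work.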
And another thing you can check is del dot u, the divergence of u. It will in fact turn out to be d times u s over s, plus corrections subleading by one over d, okay? So the divergence of u is order d, del squared is order d, K is order d. You see the rule working again and again and again: del squared has contracted indices — the two derivative indices contract — and that gives you order d. In fact, there's a very easy way of working out del squared: just move to polar coordinates in the z directions and use the standard formula, del squared B equals one over root g times del mu of root g g mu nu del nu B. You will get it very fast: you will find that del squared B is d times del s B over s, plus higher-order corrections, okay? So, sorry for the length of this digression — it's a technical point, but one that's very important to understand: we are going to work assuming that all contracted indices give us factors of d. In what context is this true? At least if you maintain the isometry I'm talking about, it's always true, okay? So now I have one more little exercise for you, and then I can make the claim properly. This additional exercise goes as follows. Notice that the rho function for the Schwarzschild black hole obeyed an equation: del squared of one over rho to the power d minus three is equal to zero. What is this equation? This equation is simply the statement of Newton's law: the term that appears in the blackening factor of the metric is simply the Newtonian potential. In four dimensions it is one over r; in d dimensions it is one over r to the power d minus three. And one over r to the power d minus three is special because, at least away from r equals zero, it's a harmonic function, right? This is just a statement of Newton's law.
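Both claims from this stretch — that 1/r^(d-3) is harmonic away from the origin, and that the Laplacian of a rotationally invariant function is dominated by the (d/s) times d/ds term — can be checked symbolically. A sketch (my own illustration, using the radial form of the flat Laplacian):

```python
import sympy as sp

# Hedged symbolic sketch:
# (i) 1/r^(d-3) is harmonic in d-1 flat spatial dimensions away from r = 0
#     (Newton's potential): the radial Laplacian is f'' + (d-2)/r f'.
# (ii) for an SO(m)-invariant function B(s), the Laplacian in the m
#     z-directions is B'' + (m-1)/s B', dominated for large m by (m/s) dB/ds.

r, s, d, m = sp.symbols('r s d m', positive=True)

# (i) radial Laplacian in d-1 spatial dimensions acting on r^-(d-3)
f = r ** (-(d - 3))
lap_f = sp.simplify(sp.diff(f, r, 2) + (d - 2) / r * sp.diff(f, r))

# (ii) radial Laplacian in m dimensions on a sample invariant function
B = sp.exp(-s)
lap_B = sp.diff(B, s, 2) + (m - 1) / s * sp.diff(B, s)
leading = -m / s * sp.exp(-s)    # the term that grows with the dimension m
ratio_limit = sp.limit(sp.simplify(lap_B / leading), m, sp.oo)
```

The first quantity vanishes identically for all d, and the ratio of the full Laplacian to its (d/s) d/ds piece tends to one as the number of invariant directions grows.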
This is the Newtonian potential, and away from r equals zero — and we're always away from r equals zero in the neighborhood of rho equals one — del squared of one over r to the power d minus three is zero. Now, what I am going to do is restrict my rho function so that it obeys this equation on the surface rho equals one, okay? And I'm going to show you that this, to leading order in d, determines the modulus of the gradient of rho — I can say the modulus because the gradient of rho is, by definition, in the normal direction. Actually, let me leave this for you as an exercise. Exercise: if you impose this condition, del squared of one over rho to the power d minus three equals zero, then you can show that, to leading order in the one-over-d expansion, the norm of del rho is equal to K — the trace of the extrinsic curvature of your surface — divided by d, plus corrections of order one over d. This is not a difficult exercise to do; if you're interested in this large-d stuff, try it out, and come and discuss it with me if you have trouble showing it. Now, where was I going? What did I do all this for? Let's recap. We blew up the small region, and we found that the metric was naively sensitive only to the norm of the gradient of rho. Now you could ask: is there a principle that determines it? And the principle is simply this. What we want is that any little patch of the metric here be identical to a little patch about the event horizon of some genuine Schwarzschild black hole. Of course, because there's variation, it won't be completely identical; but at leading order in d, we want that to be true, okay?
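As a sanity check of the quoted exercise in its simplest instance (my own illustration, not worked in the lecture): for the Schwarzschild rho-function rho = r, the surface rho = 1 is the unit sphere in d - 1 flat spatial dimensions, whose trace extrinsic curvature is K = d - 2, so the relation |grad rho| = K/d + O(1/d) reads 1 = (d - 2)/d + O(1/d):

```python
# Hedged sanity check: for rho = r (Schwarzschild), |grad rho| = 1 everywhere,
# while K/d = (d - 2)/d on the unit sphere, so the mismatch is exactly 2/d
# and dies off as d grows, as the exercise's O(1/d) correction demands.

def relative_error(d):
    grad_rho = 1.0       # |grad r| = 1
    K = d - 2.0          # trace extrinsic curvature of the unit sphere
    return abs(grad_rho - K / d)

errors = [relative_error(d) for d in (100, 1000, 10000)]
```

The error is 2/d on the nose, consistent with the claimed one-over-d expansion.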
The variation only shows up at the next order because, as we've seen, in the blown-up coordinates everything else varies on distances of order d. This condition is true of all Schwarzschild black holes, irrespective of the center, the size, the velocity. So it's a nice condition: it's true of the metric of any Schwarzschild black hole, and as we've just seen, it determines the norm of d rho, which naively is the only important thing about the metric in this region — the only important variation there, okay? Now, there's one more thing. I told you that, for instance, the velocity field varies on length scale d, so its derivatives are subleading in the one-over-d expansion. But now you see that that's not completely true. Because, you see, suppose you evaluate Einstein's equations on this configuration: anything involving a derivative of the velocity is indeed one over d, but there's a way of getting a compensating factor of d, namely by computing the divergence of the velocity. So while something like u dot del u is definitely of order one over d, the divergence del dot u could be — in general is — leading order. In order that this little patch actually solve Einstein's equations at large d, we must ensure that this quantity takes the value that it does in Schwarzschild black holes. But what value does it take there? Well, you can look at the simplest case, a Schwarzschild black hole at rest: u is constant — it's the time direction, and there's no curvature — so del dot u is zero, okay? So now, motivated by all these considerations, we have two additional conditions that we will impose on our rho field and our u field. The first condition is that del squared of one over rho to the power d minus three equals zero on the surface. This condition need only apply on the surface, because we only care about the region of size one over d around the surface; what happens elsewhere, we don't care. Moreover, del dot u is equal to zero on the surface.
So these are the two additional conditions that we will impose on the rho functions and the u functions. After imposing these two conditions, you can carefully check — at least in the case where you maintain this kind of isometry, irrespective of the details of the isometry — that this little patch solves Einstein's equations at leading order in large d. And it basically had to, because we've engineered it so that this little patch is always isomorphic, up to terms subleading in one over d, to a little patch of some Schwarzschild black hole. But of course you can check this more seriously than I have done for you — you can check it algebraically in many ways. We've done those checks and it's true. Is this clear? Questions or comments about this? So the final claim is that if you start with this metric, with all these conditions obeyed, then you have a metric that (a) has an event horizon at rho equals one, (b) decays exponentially to flat space once rho minus one is large compared to one over d, and (c) solves Einstein's equations to leading order in large d everywhere outside its own event horizon. Because the only place where it was not obvious that it would do so was in this little membrane patch region, and now we have imposed enough conditions that it does so there as well. And therefore this metric is a good starting point for a perturbative construction of true solutions of Einstein's equations, order by order in the one-over-d expansion. This is in some sense the most important part of what I'm telling you. The rest, of course, takes most of the pages of the paper and most of the work, but it is more or less assured to work out: once this is right, perturbation theory has to work. Okay, questions or comments before we proceed? Okay, excellent. Now there's one more slightly irritating technical thing before we go on.
I should tell you about it just so that you know every element that goes into the calculation. And it's this. You might think that the functions we've been discussing really are functions of all of spacetime — that the velocity field is a function of all of spacetime. This might be jarring for you, given what I told you in the first lecture. There I told you that we would be looking at spacetimes constrained by data that live on a membrane, a codimension-one surface of spacetime, okay? Whereas the metrics I've produced for you look like they depend on much more data: they look like they depend on functions of all of spacetime. But you already know from what we've discussed that there's something misleading about this. And how do you know that? Let me say this clearly. Suppose I take two functions, rho one and rho two, both satisfying all these conditions — that is, each obeys this equation on its surface rho equals one — and suppose the equations rho one equals one and rho two equals one have the same solutions, so that the two rho functions define the same membrane surface: the zero set of rho one minus one is the same as the zero set of rho two minus one. This doesn't at all tell you that rho one and rho two are the same function. For instance, rho two could be rho one plus rho one minus one, the whole thing squared, and so on, okay? So suppose we have two functions with the same zero set for rho minus one equals zero. Moreover, we choose two velocity fields, u one and u two, such that u one and u two agree at rho equals one.
Since the two rho's have the same zero set, I don't need to say whether it's rho one equals one or rho two equals one, okay? Then the metrics I create starting with rho one, u one and with rho two, u two are of course different from each other. But how do they differ? Claim: these two metrics agree with each other at leading order in the large-d expansion, and differ from each other only at subleading orders in the large-d expansion. Why is this true? What happens far away from the membrane is totally irrelevant, because both metrics go to flat space up to exponentially suppressed corrections. So the only thing that matters is what happens nearby. And we've seen that the metric at leading order is determined just by what u is on the membrane, and by what rho and its first derivative are on the membrane. We've also seen that our condition fixes the first derivative of rho in terms of the surface data, because it sets it equal to the trace of the extrinsic curvature over d. So the differences between the metrics only occur at first subleading order in one over d. Is this clear? So although different functions rho and u give you different metrics, as long as they have the same zero set for rho minus one equals zero, and the velocity fields agree on the surface rho equals one, the two metrics differ from each other only at subleading order in one over d. Why is that important? It's important because we're going to use this configuration as the starting point for a perturbative expansion of solutions. Now, suppose you give me some metric as a starting point, and I take your starting point and add one over d times my favorite function to it. These are equivalent starting points for perturbation theory, because when we do perturbation theory, we will be computing corrections to your metric and mine in the one-over-d expansion.
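The claim that two rho-functions with the same zero set give metrics agreeing at leading order can be illustrated numerically (my own sketch, using the example of the kind just mentioned): take rho1 = r and rho2 = r + (r - 1)^2, which both equal one exactly at r = 1, and compare the profiles in the membrane region:

```python
import numpy as np

# Hedged numerical sketch: rho1 = r and rho2 = r + (r-1)^2 have the same zero
# set of rho - 1.  In the membrane region r - 1 ~ 1/d, the metric profiles
# 1/rho^(d-3) then differ only at order 1/d: the maximal difference should
# shrink roughly by a factor of 4 each time d is quadrupled.

def profile(rho, d):
    return rho ** (-(d - 3))

def max_difference(d):
    # sample the membrane region r - 1 in [0, 3/d]
    r = 1.0 + np.linspace(0.0, 3.0, 200) / d
    rho1 = r
    rho2 = r + (r - 1.0) ** 2
    return np.max(np.abs(profile(rho1, d) - profile(rho2, d)))

diffs = {d: max_difference(d) for d in (100, 400, 1600)}
```

The difference scales like 1/d, so quadrupling d divides it by roughly four: exactly the statement that the two starting points are equivalent for perturbation theory.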
And when I compute the first one-over-d correction, your one-over-d correction and mine will differ precisely by my favorite function, so as to give us the same solution eventually. You see, that's the nature of perturbation theory. Perturbation theory doesn't require you to guess a starting point exactly; it only requires you to guess it correctly at leading order in your perturbative parameter. What you do at other orders is completely ambiguous. Two different starting points that agree at leading order are the same starting point for perturbation theory, in the sense that if you compute the perturbative expansion to order five, all differences up to order five will have been washed out. So it's too much of an overkill to start out with all of these functions of spacetime. What we should do is choose the physical data — the data that distinguishes starting points at leading order in large d. What's that data? It's the shape of the membrane and the velocity field on the membrane, okay? However, to actually do calculations, we need to write out a metric. So to deal with this, what we do is impose a set of subsidiary conditions: conditions that completely determine the rho function and the u function everywhere in spacetime, given their values on the membrane, okay? These subsidiary conditions can in principle be chosen completely arbitrarily. Of course, in practice, if you choose the right subsidiary conditions, you'll get much nicer answers than if you choose poorly. We spent a month tuning our subsidiary conditions to make the answer look as compact as possible. The final conditions that we chose were as follows. We chose to impose the equation del squared of one over rho to the power d minus three equals zero not just on the membrane — that we were forced to do — but everywhere.
This condition, plus the condition that one over rho to the power d minus three dies off at infinity — that is, rho blows up at infinity — uniquely fixes the function one over rho to the power d minus three. It uniquely defines this function in terms of the geometrical surface on which rho equals one. That's what we wanted: a definition of the function from the geometrical surface, because the real data is the geometrical surface. Similarly, we imposed the following condition on u — I'm not terribly sure what was natural about it, but it really made things very easy: n dot del of u is equal to one over K times del squared of u, with this equation projected orthogonal to n and to u. And then the equations n dot u equals zero and u dot u equals minus one. These two equations were necessary on the membrane surface, but we chose to impose them everywhere; the projected equation then gives the d minus two more conditions that basically tell you how the u field evolves away from the membrane, okay? These subsidiary conditions on u, and the one on rho, can be shown — and in our procedure we explicitly did this — to generate the rho function and the u function everywhere in spacetime, order by order in one over d, explicitly in terms of geometrical data. The geometrical data is the shape of the membrane and the value of the velocity field on the membrane. So we impose these constraints on our solutions just so that we don't do the same calculation twice: we want one starting point for every genuinely new solution. This makes things very convenient, okay? But, I emphasize, these conditions are totally arbitrary: if you had chosen other conditions, that would be fine — you'd get the same answer in the end. In fact, we've written two published papers so far on the subject.
In the first paper, we chose worse conditions and got a worse answer. It's only with experience that we chose better conditions and got a better answer. Maybe we'll improve the conditions yet again. Okay, now I've told you everything important. The rest is a lot of work, but it's mechanical. What is the mechanics? We've got this configuration which obeys Einstein's equations to leading order in large d in this membrane region. We want to do better than that. We want Einstein's equations to be obeyed to first subleading order in one by d. Let's suppose that's our first goal, okay? How do we achieve this? You can just plug this metric into Einstein's equations and ask: does it obey the Einstein equations to first subleading order in one by d? The answer is no. Then what you do is add corrections to this metric, explicit one-by-d-suppressed corrections, and try to choose those corrections so that the equations are obeyed to subleading order in one by d. It's a long story how you actually do this, okay? I'll spare you all of that and just tell you the answer, okay? The answer is, first, that you can find such correction terms to the metric if and only if the surface and the velocity field are not arbitrary, but instead obey equations of motion. And the equations of motion are these: del squared u over kappa, minus del kappa over kappa, minus u dot del u, contracted with the projector, is equal to zero. Let me be specific about the projector: P A B is equal to g A B plus u A u B, where g A B is the metric on the membrane, and g A B is equal to eta A B minus n A n B, okay? So this equation should be read as an equation on the world volume of the membrane. 
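Written out, the membrane equation of motion just stated reads (again my transcription of the spoken formulas, with K for kappa):

```latex
\left(
\frac{\nabla^{2} u_{A}}{\mathcal{K}}
\;-\; \frac{\nabla_{A}\mathcal{K}}{\mathcal{K}}
\;-\; u\cdot\nabla\, u_{A}
\right) P^{A}{}_{B} = 0,
\qquad
P_{AB} = g_{AB} + u_{A} u_{B},
\qquad
g_{AB} = \eta_{AB} - n_{A} n_{B},
```

with all derivatives intrinsic to the membrane world volume, whose metric g is induced by the flat spacetime metric.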
u is a velocity field on the world volume of the membrane. Kappa, well, in this case it's actually so simple that it's just a scalar; in general, the extrinsic curvature is a tensor on the world volume of the membrane, okay? And this equation should be read as projected orthogonal to the velocity at that point on the membrane. First, is the statement clear? It is possible to correct the metric, and I'm going to tell you that this correction is very simple, it's possible to correct the metric to solve Einstein's equations at first subleading order in one by d, if and only if the initial velocity and shape functions that you started with were not arbitrary, but obeyed this equation of motion. Is this clear? You had a question? No, this is a Laplacian on the world volume of the membrane. From now on, every equation on the membrane refers to the metric on the membrane, and the metric on the membrane is the metric induced on the shape by eta A B. The metric is this: flat space, and the metric for the intrinsic Laplacian is the metric induced by the flat space metric on the surface, okay? So all vestiges of general relativity have gone away, okay? All we have is a membrane moving in flat space. It inherits a metric because there's a metric induced on it by flat space. This is the intrinsic Laplacian, and the extrinsic curvature is the usual extrinsic curvature. Any other questions or comments? Yes, as you will see, all of these terms look like terms from hydrodynamics. This looks like the convective term of the Navier-Stokes equations, u dot del u. This looks like the viscous term of the Navier-Stokes equations. This looks like a pressure term, okay? All of these terms look very familiar from studies of fluid dynamics. 
However, for interpretations, hang on a little bit; it'll become clearer when I talk about the stress tensor of the membrane, okay? That I will only get to in the lecture tomorrow. We will try to give physical interpretations to all these terms, but please hang on for that; it'll become a bit clearer when I talk about the stress tensor. Yes, gradients? No, del squared by itself scales like D, because the rule is that contracted indices scale like D. But remember, kappa also scales like D, so del squared u over kappa is order one. A gradient by itself is fine, no contraction, okay? u dot del is fine, okay? Because there's no dangerous sum over indices: it's u A del A, just the derivative in the u direction, okay, and that's order one. It's like a time derivative of u. Order one, order one, order one, order one. Okay, everything's of the same order because I've evaluated things only to leading order in large D, the leading non-trivial order. When I tell you about results at first subleading order, we'll find some terms which scale like one over D compared to other terms, but we've just not gone to that order yet. Any other questions or comments about this? Okay, so the first thing I want to do now is some counting. You see, by the way, that's remarkably simple. Yes, it's pretty simple. How many variables did I have in my problem? Number of variables: one shape. The shape function carries the data of one scalar function. How do I see this? For instance, if I want to specify a shape, at least locally, the way I can do it is by specifying a height function: choose one coordinate as special and say what that coordinate is as a function of the other coordinates, okay? So one function of D minus one variables specifies a surface. So every time I talk about functions now, I'll be talking about functions of D minus one variables, because these are functions that live on the membrane, okay? 
So the shape is specified by one scalar function. What about the velocity field? The velocity field sounds like it's D minus one components because it's a vector field, yes? Is kappa equivalent to the shape function? Well, given the shape function, you can compute kappa. So I suppose if you knew kappa everywhere, perhaps you could reconstruct the shape function; I would have to think about that, but it's certainly the same amount of data. If you cannot, it's only up to some discrete ambiguity, yeah. Okay, so we've got one scalar function for the shape. What about the velocity? You might think that the velocity is D minus one degrees of freedom, because after all it's a vector field on a D minus one manifold. However, there are two things about the velocity field that we know. A, that u squared is equal to minus one. That kills one degree of freedom. B, that del dot u is equal to zero. That kills a second degree of freedom. So how many degrees of freedom are left? Somebody? D minus three. So the velocity is D minus three functions of D minus one variables, and the total amount of data is D minus two functions. How many equations do we have? Well, this is a vector equation on a D minus one manifold, so it looks like D minus one equations. But there's a projector here that removes one component of that equation. So it's D minus two equations. This is really important. We have as many equations as variables. A miracle seems to have happened. And the miracle is this: these equations that came out as integrability constraints, the conditions that were necessary to be satisfied in order that we could continue our perturbative expansion further, are exactly as many in number as there were free functions. What does that tell you? It tells you that, at least from a counting point of view, the integrability conditions have gained a new role. They've become dynamical equations, dynamical equations for the data that went into this construction. 
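The counting just described can be packaged in a few lines. This is only bookkeeping, a minimal sketch of the argument for general D, but it makes the match between variables and equations explicit:

```python
# Bookkeeping for the counting argument: variables vs. equations, for general D.
def membrane_counting(D):
    # Variables: one height function specifies the membrane shape locally.
    shape_dof = 1
    # The velocity is a vector field on the (D-1)-dimensional world volume,
    # minus two constraints: u.u = -1 and div u = 0.
    velocity_dof = (D - 1) - 2
    variables = shape_dof + velocity_dof
    # Equations: a vector equation on the (D-1)-dimensional world volume,
    # minus the one component removed by the projector orthogonal to u.
    equations = (D - 1) - 1
    return variables, equations

for D in range(5, 12):
    v, e = membrane_counting(D)
    assert v == e == D - 2     # as many equations as variables, namely D - 2
print("counting matches: variables = equations = D - 2")
```

So for every D the system is neither over- nor under-determined, which is what lets the integrability constraints play the role of dynamical equations.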
And, at least from a counting point of view, they define an initial value problem for the data that went into the construction, the data being the shape of the membrane and the velocity field on the membrane. So these pesky integrability constraints have suddenly breathed life into the problem. These integrability constraints tell you that we've got a well-defined dynamical system, at least from a counting point of view, for the shape and the velocity field on the membrane. Does it continue to work at higher orders? Okay, let me give you an experimental answer to that question. Yogesh, Shubhajit, anyone else here? A group of students and postdocs at TIFR have continued this to the next order in one by D, and it works, okay. They'll be putting out the paper in a month or so, I think. This equation gets corrected; while this equation is one line, the correction is maybe two and a half lines. But it continues to work: if you correct the equation appropriately, you can continue the expansion. This is exactly as it was in the fluid gravity correspondence. Okay, the logic is very similar. The ability to continue a naive perturbation expansion beyond leading order has some constraints. Those constraints seem irritating at first, but when you look at them, they're far from irritating. They're exactly the right number to give dynamics to the elements that go into your construction. And so this perturbation procedure defines a one-to-one map between two different dynamical systems. The first dynamical system is Einstein's equations involving event horizons at large D, under the condition that all variations are slow compared to one over D. 
The second dynamical system is this auxiliary problem: a codimension-one membrane propagating in flat space, with a divergence-free velocity field on it, subject to this equation of motion, order by order in large D, at least at first order, and we've checked second order. If you solve this problem, you find a solution of the Einstein problem. And vice versa: if you find a slowly varying solution of the Einstein problem, you find a solution of this problem. They are equivalent dynamical systems, subject to the constraint that the Einstein dynamical system satisfies this slowly varying condition. Questions or comments? Yes, it is corrected. It is corrected. In fact, Yogesh and Shubhajit and collaborators have explicitly determined the correction, so I could write it out for you. I don't propose doing it, just because what would you get from it at this level? You'd get a lot if you wanted the details, and it's explicitly available; we could show it to you. Okay? Now, the last thing I have to tell you is what happens... oh my. Okay, two minutes, yes, okay. I've been slow. The last thing I have to tell you is this: once the integrability condition is satisfied, what is the correction to the metric? If you look at our first paper on the subject, we've got a three-line correction to the metric. That's because we chose bad subsidiary conditions. If you look at our second paper on the subject, we've got a one-term correction to the metric, because we chose better subsidiary conditions. But, though we've not published this, with the subsidiary conditions I told you about, the correction to the metric at first order is zero. The leading order answer satisfies Einstein's equations at first subleading order in one by D, provided those equations are satisfied, period. Okay? Things will be a little more complicated when we add charge to the problem. But since I've run out of time, I'll stop. 
In my last lecture tomorrow, I have many things to tell you about. I have to tell you how to generalize this with charge. I have to tell you about solutions to these equations. I have to tell you about quasinormal modes. I have to tell you about radiation. So tomorrow we'll do maybe seminar style, which means that I get to say "it can be shown." Okay? Okay. A little louder so people can hear. [Audience question.] Yes. These are soap-bubble-type equations. You see, because you can think of this as some sort of soap bubble. A soap bubble has a shape and a velocity, okay? You can ask: the equations of fluid dynamics have been studied intensively for 200 years. The equations of soap bubbles have no doubt been studied, but less intensively, I'm sure, okay? I suspect that many of these terms will be familiar to people who are interested in soap bubbles, but I just don't know that literature. Yep. However, it's not some very well studied other structure that we're making contact with; it's a well-defined dynamical problem of the motion of something like a soap bubble. Oh, sorry. Surface gravity, yeah. In a stationary situation, which is where surface gravity is normally defined, it turns out to be the following... I would have to tell you more. We're going to discuss this tomorrow when we look at stationary solutions of these equations. Other questions or comments?