We have discussed the physics of other forces, of other matter, in a curved background space. But we haven't yet understood what action sets the dynamics of the gravitational field itself, of the metric. That's what we're going to discuss now. In order to do this, what we need is some expression built out of the metric field and its derivatives that is generally covariant, so that we can add this expression to the action and generate equations of motion for the metric. Now, how are we going to make such a generally covariant expression out of the metric and its derivatives? We're going to appeal to geometry. Remember that we had this notion of parallel transport: a small change in a one-form when we parallel transport it from x to x + dx is given by

δA_μ = Γ^θ_{μα} A_θ dx^α.

You can now ask the following question. Suppose I take a one-form and parallel transport it around a little loop, so that the path comes back to its starting point. Does this process of parallel transport bring the one-form back to itself, or is there a change? If there's a change, what is the change? The answer is going to be that, in general, it does not come back to itself: there will be a change, and we want to compute what it is. So let's suppose that the size of the loop we're looking at is of order ε, meaning its length scale is of order ε. We're going to do our calculation to the first non-trivial order, the first place where we see an effect, and that first non-trivial order turns out to be order ε². So we're going to do this calculation: take this one-form around a little loop and bring it back to the starting point. The loop size is of order ε.
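As a concrete illustration of the transport rule δA_μ = Γ^θ_{μα} A_θ dx^α, here is a minimal numerical sketch (my own example, not from the lecture) on the unit 2-sphere with coordinates (θ, φ), where the only nonzero Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ:

```python
import math

def christoffel(x):
    # Christoffel symbols of the unit 2-sphere, coords x = (theta, phi).
    # G[i][k][l] = Gamma^i_{kl}; only the components below are nonzero.
    th = x[0]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th) * math.cos(th)              # Gamma^th_{ph ph}
    G[1][0][1] = G[1][1][0] = math.cos(th) / math.sin(th)  # Gamma^ph_{th ph}
    return G

def transport_step(a, x, dx):
    # one infinitesimal parallel-transport step of a one-form a_mu:
    # delta a_mu = Gamma^t_{mu al} a_t dx^al
    G = christoffel(x)
    return [a[m] + sum(G[t][m][al] * a[t] * dx[al]
                       for t in range(2) for al in range(2))
            for m in range(2)]
```

For example, transporting the one-form (1, 0) at θ = π/4 a small distance dφ = 10⁻³ in the φ direction picks up a φ-component −sin θ cos θ dφ = −5·10⁻⁴, while the θ-component is untouched.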
At order ε², we'll see that we don't come back exactly to ourselves, and we're going to compute what that effect is. Now, how do we do this? We've got a differential equation that tells us how A changes as we move an infinitesimal distance. We have to integrate this differential equation along the path that comes back to its starting point, and that gives us the answer. There is a formal solution to such differential equations in terms of path-ordered exponentials, if you want to do it exactly. But such complicated things are not for us at the moment; we can work simply, because we only want the answer to order ε². So, we start at a point and we've got a little loop. Let's say that at the starting point A_μ is equal to A_μ(0), and for convenience let's put the starting point at x = 0. Now, A changes as we move along the path: the change is an integral along the length of the path, and the integrated dx gives you a factor of ε. So, if we are interested in the change in A around the loop at order ε², then on the right-hand side of this expression we only need to input the value of A correct to order ε. Is this clear? The point is that we can't just do this integral treating A as a constant, because A changes along the path according to this same equation: A is A(0) here, something else here, something else there. So, in order to compute the change along this path, we need to know what A is along the path. But the great thing about working in this order-by-order way is that we only need to know A at order ε to get the change right at order ε². So, first let's compute A at order ε.
Well, if we use the same recursive logic, then to get A at order ε we only need to input A at order zero into this expression. So let's write

A_μ = A_μ^(0) + ε A_μ^(1) + ε² A_μ^(2) + …

and ask what A_μ^(1) is. A_μ^(1) obeys the transport equation with Γ and A both evaluated at zero: if we Taylor expand Γ, the first correction is already order ε, and since we're doing an integral that itself supplies a factor of ε, keeping that correction would be overkill. So

dA_μ^(1) = Γ^θ_{μα}(0) A_θ(0) dx^α.

Since these are both constants, the solution is obvious:

A_μ^(1) = Γ^θ_{μα}(0) A_θ(0) x^α.

So we have A_μ = A_μ(0) + Γ^θ_{μα}(0) A_θ(0) x^α + …, where the first term is order ε⁰, the second is order ε¹, and everything higher is order ε². (ε here is a formal order-counting parameter; in the end you can set it to one. ε is in your head, so to speak: it's x, or δx, that actually carries the smallness.) Now, in order to get the change right to quadratic order, we also need Γ correct to order ε. That's simple: we just Taylor expand,

Γ^θ_{μα}(x) = Γ^θ_{μα}(0) + ∂_φ Γ^θ_{μα}(0) x^φ + O(ε²),

with all derivatives evaluated at zero; I'm not going to indicate that explicitly. The higher-order things at order ε² we don't need.
All we have to do now is plug these in. So let's do that. First we should ask what happens at order ε. At order ε, the change is Γ^θ_{μα}(0) A_θ(0) ∮ dx^α, and the integral of dx^α over any closed curve is obviously zero. So at order ε there's no change; the first change is at order ε². So what's the change at order ε²? Well, we plug in. One term comes from the derivative of Γ:

∂_φ Γ^θ_{μα} A_θ x^φ dx^α.

The second term comes from putting A^(1) into the transport equation. Here we want the transport formula with μ replaced by θ, so we relabel: μ went to θ, α went to β, and θ went to ζ. Keeping that cheat sheet in mind, the A_θ in the formula becomes A_ζ and the x^α becomes x^β, so the second term is

Γ^θ_{μα} Γ^ζ_{θβ} A_ζ x^β dx^α,

and the whole thing is integrated against dx^α around the loop. Now, while it's obviously true that the integral of dx^α over a closed path is zero, as we got for the first-order variation of A, is it true that the integral of x dx, with lots of indices thrown in, is always zero? No, it's not. This observation was made by many people, I'm sure, but it's essentially the observation of Stokes: Stokes' theorem tells you what you get if you integrate a vector field over a closed path. The answer is the integral of the curl of the vector field over the enclosed area. We will say this in a slightly better way in just a moment. Now, at first order we got zero because the vector field being integrated was a constant,
so its curl was zero. At second order we don't get zero, because the vector field is proportional to x, so its curl is constant but not zero. All we have to do is apply Stokes' theorem to this answer. In order to do that, let's make the indices uniform: both terms are integrated against dx^α, but in one term the dummy index on x was β while in the other it was φ, so let's just relabel β to φ. Now let me remind you what Stokes' theorem says. If you've got an integral of the form ∮ M_α dx^α around a closed loop, then

∮ M_α dx^α = (1/2) ∫ (∂_θ M_α − ∂_α M_θ) df^{θα},   where df^{θα} = dx₁^θ dx₂^α − dx₁^α dx₂^θ.

What does this mean? You've got an area element. You break it up into little boxes; the sides of each box are the two vectors dx₁ and dx₂, and df^{θα} is the parallelogram element formed from those two little vectors. Then you sum over all the little boxes. If you rewrite it in the way you're more familiar with, this is curl dot normal: the curl has an ε symbol in it, and contracting that ε with df gives you the normal vector. But this is a better way to write Stokes' theorem, because it generalizes beyond three dimensions; it's the natural way of writing the theorem. This is the wedge-product notation: this is the integral of a differential form, for those of you who know what that means. We'll all know what it means by the end of this course. The factor of 1/2 is there because the antisymmetric sum over θ and α double counts each pair; when you dualize to the normal in three dimensions you get twice the normal, which cancels the 1/2. Fine.
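To see this form of Stokes' theorem in action, here is a small numerical check (my own illustrative example, not from the lecture): for a linear field M_α = c_{φα} x^φ in two dimensions, the loop integral around a square of side ε should equal (c₁₂ − c₂₁) ε², which is exactly (1/2)(∂_θ M_α − ∂_α M_θ) f^{θα} with f^{12} = −f^{21} = ε².

```python
def loop_integral(c, eps, n=400):
    # loop integral of M_alpha dx^alpha for the linear field
    # M_alpha = c[phi][alpha] * x^phi, around the counterclockwise
    # square [0, eps]^2 (midpoint rule, exact for linear integrands)
    h = eps / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        # bottom: x = (t, 0), dx = (h, 0); top: x = (eps - t, eps), dx = (-h, 0)
        total += (c[0][0] * t) * h
        total -= (c[0][0] * (eps - t) + c[1][0] * eps) * h
        # right: x = (eps, t), dx = (0, h); left: x = (0, eps - t), dx = (0, -h)
        total += (c[0][1] * eps + c[1][1] * t) * h
        total -= (c[1][1] * (eps - t)) * h
    return total
```

The diagonal pieces of c cancel around the loop, and only the antisymmetric combination c₁₂ − c₂₁ survives, exactly as the curl formula says.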
So, now we're going to use this theorem. We just have to identify what M_α is. Well, let's see: there are many indices around here, but most of them just go along for the ride. This μ index, as far as Stokes' theorem is concerned, is going along for the ride. What's important is that on the right-hand side there's an α index contracted with dx^α; whatever else is there, there will be some extra indices dressing M, and those indices just go along for the ride. Is this clear? [A question from the audience about the area element in two dimensions.] Yes: df^{θα} is really dx₁^θ dx₂^α − dx₂^θ dx₁^α. We've got these two little vectors, vector 1 and vector 2, and this is the area element of the little panel they span. Excellent, that was a good question. By the way, this is what Landau-Lifshitz refers to as Δf^{αβ}: the area element on the surface. So when you see Δf^{αβ} written in Landau-Lifshitz, this is what it means. Fine. Now, what we want to do is compute ∂_φ M_α − ∂_α M_φ. For us,

M_α = ∂_φ Γ^θ_{μα} A_θ x^φ + Γ^θ_{μα} Γ^ζ_{θφ} A_ζ x^φ,

where I've written the dummy index on x as φ in both terms so that the indices match. The derivative of an expression linear in x just drops the x. So

∂_φ M_α = ∂_φ Γ^θ_{μα} A_θ + Γ^θ_{μα} Γ^ζ_{θφ} A_ζ,

with A_θ and A_ζ evaluated at 0, and then, according to Stokes' theorem, I'm supposed to subtract out the same thing with φ and α exchanged.
So, subtracting the version with φ → α:

∂_φ M_α − ∂_α M_φ = ∂_φ Γ^θ_{μα} A_θ + Γ^θ_{μα} Γ^ζ_{θφ} A_ζ − ∂_α Γ^θ_{μφ} A_θ − Γ^θ_{μφ} Γ^ζ_{θα} A_ζ.

[In answer to a question: yes, we're matching against ∮ M_α dx^α; the φ index in M was contracted with x^φ, and taking the derivative with respect to φ just kills that x.] This, integrated over the area, is our final answer. And this expression has a name. Let me say it precisely by giving a definition. Define

R^i_{klm} = ∂_l Γ^i_{km} − ∂_m Γ^i_{kl} + Γ^i_{nl} Γ^n_{km} − Γ^i_{nm} Γ^n_{kl}.

Then, with this definition, if we've done everything right, it should be that the change in the one-form is what Landau-Lifshitz says:

ΔA_k = (1/2) R^i_{klm} A_i Δf^{lm}.

Let's check that we've got the same answer. To compare, let me first rewrite our conclusion in that form. What did we conclude? We were computing ΔA_μ, and in our expression the index contracting the A was θ in some terms and ζ in others.
ζ is not used in the first pair of terms, so let's unify: rename that θ to ζ, so every term carries A_ζ. Now all the indices are good, and we can write

ΔA_μ = (1/2) [ ∂_φ Γ^ζ_{μα} − ∂_α Γ^ζ_{μφ} + Γ^θ_{μα} Γ^ζ_{θφ} − Γ^θ_{μφ} Γ^ζ_{θα} ] A_ζ Δf^{φα}.

To compare with Landau-Lifshitz, I just replace, in the definition of R^i_{klm}, l and m by φ and α (upper position of the pair is l), and i and k by ζ and μ:

R^ζ_{μφα} = ∂_φ Γ^ζ_{μα} − ∂_α Γ^ζ_{μφ} + Γ^ζ_{nφ} Γ^n_{μα} − Γ^ζ_{nα} Γ^n_{μφ},

where the n is our θ. And this matches term by term: we have the same expression as Landau-Lifshitz. So what have we concluded? We've concluded that the change in this one-form, as we go around this little loop, is given by this expression, where the object R^i_{klm} is what's called the curvature tensor. One of the things we've learned is that parallel transport cannot, in general, be unambiguously used to define a vector field or a one-form field from a given vector at a point. Suppose I've got a given vector, or a given one-form, at one point, and I try to define a field by parallel transporting it to every other point in space. That process is ambiguous: it depends on what path you use for the parallel transport, because, as we've just seen, different paths give you different answers. So parallel transport is less universal than you might have thought. It's path dependent.
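Both halves of this conclusion, the definition of R^i_{klm} and the holonomy formula ΔA_k = (1/2) R^i_{klm} A_i Δf^{lm}, can be checked numerically. Here is a sketch on the unit 2-sphere (my own test case, not from the lecture; the derivatives of Γ are taken by finite differences), where the known nonzero component is R^θ_{φθφ} = sin²θ. For A = (1, 0), the mismatch after a coordinate square of side ε should have magnitude roughly sin²θ ε², and it should scale as ε².

```python
import math

def christoffel(x):
    # Christoffel symbols of the unit 2-sphere, coords x = (theta, phi).
    # G[i][k][l] = Gamma^i_{kl}; only the components below are nonzero.
    th = x[0]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th) * math.cos(th)
    G[1][0][1] = G[1][1][0] = math.cos(th) / math.sin(th)
    return G

def riemann(x, i, k, l, m, h=1e-5):
    # R^i_{klm} = d_l Gamma^i_{km} - d_m Gamma^i_{kl}
    #           + Gamma^i_{nl} Gamma^n_{km} - Gamma^i_{nm} Gamma^n_{kl}
    def dG(l, i, k, m):  # central-difference derivative d_l Gamma^i_{km}
        xp, xm = list(x), list(x)
        xp[l] += h; xm[l] -= h
        return (christoffel(xp)[i][k][m] - christoffel(xm)[i][k][m]) / (2 * h)
    G = christoffel(x)
    return (dG(l, i, k, m) - dG(m, i, k, l)
            + sum(G[i][n][l] * G[n][k][m] - G[i][n][m] * G[n][k][l]
                  for n in range(2)))

def holonomy(a0, x0, eps, n=2000):
    # transport a one-form around the coordinate square of side eps, one Euler
    # step delta a_mu = Gamma^t_{mu al} a_t dx^al at a time; return the mismatch
    a, x = list(a0), list(x0)
    for leg in [(eps / n, 0.0), (0.0, eps / n), (-eps / n, 0.0), (0.0, -eps / n)]:
        for _ in range(n):
            G = christoffel(x)
            a = [a[m] + sum(G[t][m][al] * a[t] * leg[al]
                            for t in range(2) for al in range(2))
                 for m in range(2)]
            x = [x[0] + leg[0], x[1] + leg[1]]
    return [a[i] - a0[i] for i in range(2)]
```

The loop does not close on the one-form, and halving ε quarters the mismatch: exactly the order-ε² effect computed above.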
Now, the fact that there is curvature also tells you that covariant derivatives don't commute; we'll come to that. Something I should say first, by the way: this curvature is obviously a tensor. Why is it a tensor? Because Δf^{lm}, the biproduct of two infinitesimal displacements, is obviously a tensor with upper indices, and ΔA_k is the difference between two one-forms at the same point, so it transforms like a one-form. The only way that can work is if R transforms like a tensor, with its indices as indicated. Now, the curvature is built out of Γs, and the Γs contain derivatives of the metric field. So we've succeeded in identifying a tensor that is built out of derivatives of the metric field; we've succeeded in our task. And we're going to use this R to build Einstein's action, the action for the gravitational field. But before we actually do that, there's some more gymnastics I'm going to do. [Question: is there a point to considering the higher-order terms in the change?] Yes. This analysis can be iterated: if you want the whole answer for a finite loop, it can be written in terms of local expressions. The answer is complicated, but all the information is there in local curvatures. You see, it's like an integral. Suppose you want to know how much something changes from here to there: it changes by an infinitesimal bit at each step, and you just add it up to get the answer. It's the same thing here. What we've determined is how much something changes over a little area element; if you want the change around a big loop, you add up the changes over the little area elements. The only complicated thing is that you have to use the appropriate value of A on each element. So, let's address this question in a little more detail.
Suppose you actually wanted to compute what you would get around a big path. What you would actually do is take the original differential equation we wrote down, δA_μ = Γ^θ_{μα} A_θ dx^α, and just solve it along the path. Alternatively, we could try to apply Stokes' theorem: this whole thing also has the form of the integral of a one-form along a path. The complication there is knowing what to put in the middle. Γ, of course, is defined everywhere, but A is not defined everywhere: A was only defined on the path, by the differential equation. So what you're going to have to do is find an A field in the interior that has the correct values on the boundary, the values determined by this differential equation. Having done that, what you will get is the same local expression, but with the local value of this A field. Do you understand what I mean? [But is that A field unique?] The boundary values are unique; the filling-up of the interior is up to you, I think. [But then won't that change the answer?] No, it doesn't change R: R is a geometric quantity. What you will have to argue is that all these ambiguities drop out of the final answer. But this is just one way of doing it, and it already tells you something important, so let me say it. One way of doing it is: find an A field that fills up the surface and agrees with the right values on the boundary,
and then, with that A field, apply Stokes' theorem. We would then get the curvature expression integrated over the surface as a finite expression, I think. I'm going to have to think about this; let me think about it and give you a better answer. One thing that I do want to say, though: I claim that if R is 0, then the change is 0 even for finite paths. This is obvious, right? Because for any little bit it's 0, and so it adds up to 0. Getting the actual answer when R is nonzero will be complicated. But as for your question of whether this identifies new geometric invariants: I don't think so. It'll just be in terms of R, and maybe derivatives of R, as a local field. Good question; I'll give you a better answer later. Excellent. [Question: is R the exterior derivative of something?] No, R is not an exterior derivative. The exterior derivative applies only to forms, only to objects with completely antisymmetric indices, and this is not of that type. Yes, there is a Bianchi identity, which we will come to. And yes, there is a sense in which you're right that an exterior derivative is involved; we'll come to this in this class, hopefully. Fine. So this has already given us a quantity built out of derivatives of the metric that we can use to build Einstein's action, and we will. But before we do that, there are a few more algebraic gymnastics that we want to do in order to identify more clearly what role this R plays. The next question I'm going to ask is: suppose we had done this exercise with a vector field rather than a one-form field. What would that answer be?
Now, if you remember, when we started discussing parallel transport of vectors, we deduced how a vector must be parallel transported from the definition of how a one-form is parallel transported, just from the observation that a scalar does not change under parallel transport. So we're going to use the same idea. Take the scalar B^μ A_μ. Under parallel transport around a little loop, the change of this scalar is obviously zero. But that zero must be made up of two parts:

0 = ΔB^μ A_μ + B^μ ΔA_μ.

ΔA_μ we already know; it's given by the curvature expression. So let's plug that in:

ΔB^k A_k = −B^k ΔA_k = −(1/2) R^i_{klm} B^k A_i Δf^{lm}.

Now relabel the dummies so we can strip off the A: call the summed index on B k′ and the free index k, so the right-hand side is −(1/2) R^k_{k′lm} B^{k′} A_k Δf^{lm}. Since this holds for any A_k, that allows us to identify

ΔB^k = −(1/2) R^k_{k′lm} B^{k′} Δf^{lm}.

Excellent; this agrees with Landau-Lifshitz. So we've deduced not just how a one-form changes around a little loop, but also how a vector changes. Now we can do this for arbitrary tensors, just like we did in our discussion of parallel transport, using the fact that any tensor can be built as a sum of outer products of vectors and one-forms. The answer is obvious and is listed in Landau-Lifshitz: it's just a sum of such terms, one for each index, exactly as it was for parallel transport. I'm going to ask you to check that; I'm not even going to bother to write it down because it's a big expression. Is this OK? You can look it up in Landau-Lifshitz. Excellent. Fine.
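The consistency of the two transport rules, δA_μ = Γ^θ_{μα} A_θ dx^α for one-forms and δB^μ = −Γ^μ_{θα} B^θ dx^α for vectors, can be checked numerically: transporting both around the same loop should leave B^μ A_μ unchanged up to integration error, even though each changes at order ε². A sketch on the unit 2-sphere (my own test setup, not from the lecture):

```python
import math

def christoffel(x):
    # Christoffel symbols of the unit 2-sphere, coords x = (theta, phi)
    th = x[0]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th) * math.cos(th)
    G[1][0][1] = G[1][1][0] = math.cos(th) / math.sin(th)
    return G

def step(a, b, x, dx):
    # one-form: delta a_mu  = +Gamma^t_{mu al} a_t dx^al
    # vector:   delta b^mu  = -Gamma^mu_{t al} b^t dx^al
    G = christoffel(x)
    a2 = [a[m] + sum(G[t][m][al] * a[t] * dx[al]
                     for t in range(2) for al in range(2)) for m in range(2)]
    b2 = [b[m] - sum(G[m][t][al] * b[t] * dx[al]
                     for t in range(2) for al in range(2)) for m in range(2)]
    return a2, b2

def transport_loop(a, b, x0, eps, n=1000):
    # carry the pair (a, b) around a coordinate square of side eps
    x = list(x0)
    for leg in [(eps / n, 0.0), (0.0, eps / n), (-eps / n, 0.0), (0.0, -eps / n)]:
        for _ in range(n):
            a, b = step(a, b, x, leg)
            x = [x[0] + leg[0], x[1] + leg[1]]
    return a, b
```

At each step the first-order changes of the scalar cancel identically, which is exactly the sign argument used above to fix the vector rule.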
So we understand how everything changes under loops of parallel transport. Fine. Now the next question I'm going to ask is: is there any other important significance of this curvature? Here is the idea. If you take a vector and move it parallelly around a loop, you don't come back to yourself. But moving a vector parallelly in one direction has something to do with differentiating it covariantly in that direction, because parallel transport appears in the definition of the covariant derivative in that direction. Moving it in the other direction has to do with differentiating in the other direction. So going around the loop has to do with differentiating this way and then that way, minus the other way round: the loop can be thought of as "x-direction then y-direction" minus "y-direction then x-direction". Just thinking about this in this geometrical way makes it seem that the curvature will have something to do with the commutator of covariant derivatives: derivative then derivative, minus derivative then derivative, in two different directions. We're going to now make that precise. This, I emphasize, is important. Ordinary derivatives commute: ∂_x ∂_y is the same as ∂_y ∂_x. But covariant derivatives in general do not commute, as we will see, because the covariant derivative involves this Γ business. So let's just work it out. This first time we're going to work it out really honestly; in later calculations we'll use tricks. Let's work with a one-form. Suppose I look at ∇_α A_μ. What was that?
This was equal to

∇_α A_μ = ∂_α A_μ − Γ^ζ_{μα} A_ζ.

Now let's compute ∇_β of ∇_α A_μ. What is this quantity? Well, ∇_α A_μ is itself an object with two indices down, so we have to use a formula like this, but with two Γ terms. Is this clear? I'll write it down in two steps so those steps are totally clear. We get

∇_β ∇_α A_μ = ∂_β (∇_α A_μ) − Γ^ζ_{βα} ∇_ζ A_μ − Γ^ζ_{βμ} ∇_α A_ζ,

where the middle term is the one where the α index goes along for the ride and the last is the one where μ goes along for the ride. And now, painfully, we plug in what ∇A is. Once we're going to do it in complete detail; from next time we'll use tricks. So:

∇_β ∇_α A_μ = ∂_β (∂_α A_μ − Γ^ζ_{αμ} A_ζ) − Γ^ζ_{βα} (∂_ζ A_μ − Γ^π_{ζμ} A_π) − Γ^ζ_{βμ} (∂_α A_ζ − Γ^π_{ζα} A_π).

Painful enough; that's what the expression is. Now, what we're interested in is this covariant derivative minus the one with α and β exchanged. So let's look at these terms. The ∂_β ∂_α A_μ term is symmetric, so under the interchange it cancels. The whole Γ^ζ_{βα}(…) term is also symmetric under β ↔ α, because Γ is symmetric in its lower indices, so that cancels too. So let's see which terms remain.
Let me circle the terms we need to keep. Here I have to be careful; at first I circled the wrong things, so let me step back and redo it. Expanding ∂_β(Γ^ζ_{αμ} A_ζ) gives two pieces: (∂_β Γ^ζ_{αμ}) A_ζ, where the derivative acts on the Γ, and Γ^ζ_{αμ} ∂_β A_ζ, where it acts on the A; I had not kept the second piece. So, what did we do? ∇_α A_μ = ∂_α A_μ − Γ^ζ_{μα} A_ζ: this is correct. Then we took the covariant derivative of this object: that expression, as far as I can see, is OK. Then we plugged in. It's certainly true that the ∂_β ∂_α term cancels, and the Γ^ζ_{βα}(…) term is certainly symmetric, so that goes. But the remaining terms are not all symmetric. The term with the derivative of Γ, (∂_β Γ^ζ_{αμ}) A_ζ, survives the antisymmetrization, and so does the ΓΓ term Γ^ζ_{βμ} Γ^π_{ζα} A_π, because one of the Γs has an index that contracts with the A. This term minus that term is going to give us the ∂Γ − ∂Γ and ΓΓ − ΓΓ pieces of the curvature.
But now the terms where the derivative acts on just the A should have cancelled; let's check that. The pieces I hadn't dealt with are

−Γ^ζ_{αμ} ∂_β A_ζ − Γ^ζ_{βμ} ∂_α A_ζ,

and, as was pointed out from the audience, this sum is manifestly symmetric under α ↔ β, so subtracting the α ↔ β expression kills it. Thank you. So when I circle the surviving terms, I'll circle them more cleverly: I keep only the piece where the derivative acts on the Γ. Now we're in business. So what's our answer? We might as well antisymmetrize immediately. With the minus signs worked through (minus into minus is plus for the ΓΓ terms), and relabeling the dummies so that the index contracted with A is ζ throughout (introducing π where needed), the surviving terms are

∇_β ∇_α A_μ − ∇_α ∇_β A_μ = [ −∂_β Γ^ζ_{αμ} + ∂_α Γ^ζ_{βμ} + Γ^π_{βμ} Γ^ζ_{πα} − Γ^π_{αμ} Γ^ζ_{πβ} ] A_ζ.
Now, this is claimed to be the curvature. What we should have got, according to Landau-Lifshitz, is

∇_β ∇_α A_μ − ∇_α ∇_β A_μ = R^φ_{μαβ} A_φ.

Cheat sheet against the definition of R^i_{klm}: i = φ, k = μ, l = α, m = β. Then

R^φ_{μαβ} = ∂_α Γ^φ_{μβ} − ∂_β Γ^φ_{μα} + Γ^φ_{nα} Γ^n_{μβ} − Γ^φ_{nβ} Γ^n_{μα},

and with φ relabeled to ζ and n to π, this is exactly, term by term and sign by sign, the bracket we just derived: ∂_α Γ^ζ_{βμ} − ∂_β Γ^ζ_{αμ} + Γ^ζ_{πα} Γ^π_{βμ} − Γ^ζ_{πβ} Γ^π_{αμ}. So we found the right formula. I'll urge you to check this carefully; you can do it in a few minutes. OK, excellent. Fine. So we've concluded that the antisymmetric part of two covariant derivatives, acting on an object that transforms as a one-form, is not zero, but is given by this formula.
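The commutator identity ∇_β ∇_α A_μ − ∇_α ∇_β A_μ = R^φ_{μαβ} A_φ can also be checked numerically. A sketch on the unit 2-sphere with an arbitrary smooth test one-form field (both the field and the finite-difference scheme are my own choices), using the known component R^θ_{φφθ} = −sin²θ:

```python
import math

def christoffel(x):
    # Christoffel symbols of the unit 2-sphere, coords x = (theta, phi)
    th = x[0]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th) * math.cos(th)
    G[1][0][1] = G[1][1][0] = math.cos(th) / math.sin(th)
    return G

def a_field(x):
    # an arbitrary smooth test one-form field, components (a_theta, a_phi)
    th, ph = x
    return [th * math.sin(ph), math.cos(th) + ph * ph]

def cov_d(x, h=1e-5):
    # (nabla_al a)_mu = d_al a_mu - Gamma^t_{mu al} a_t ; returns T[mu][al]
    G = christoffel(x)
    T = [[0.0] * 2 for _ in range(2)]
    for al in range(2):
        xp, xm = list(x), list(x)
        xp[al] += h; xm[al] -= h
        for mu in range(2):
            T[mu][al] = ((a_field(xp)[mu] - a_field(xm)[mu]) / (2 * h)
                         - sum(G[t][mu][al] * a_field(x)[t] for t in range(2)))
    return T

def cov_d2(x, mu, al, be, h=1e-4):
    # (nabla_be nabla_al a)_mu, treating nabla a as a (0,2) tensor:
    # d_be T[mu][al] - Gamma^t_{mu be} T[t][al] - Gamma^t_{al be} T[mu][t]
    xp, xm = list(x), list(x)
    xp[be] += h; xm[be] -= h
    G, T = christoffel(x), cov_d(x)
    return ((cov_d(xp)[mu][al] - cov_d(xm)[mu][al]) / (2 * h)
            - sum(G[t][mu][be] * T[t][al] + G[t][al][be] * T[mu][t]
                  for t in range(2)))
```

Taking μ = φ, α = φ, β = θ, the left side is (∇_θ∇_φ − ∇_φ∇_θ) a_φ and the right side is R^θ_{φφθ} a_θ = −sin²θ a_θ; the two agree to well within the finite-difference error.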
This is another way, of course, of seeing that this R is a tensor: the covariant derivative was built to be a tensor, the difference of two covariant derivatives is also a tensor, and A is a tensor; therefore this object must be a tensor which transforms in the right way. Is this clear? Excellent. Fine. So now we understand many things about the curvature. Now, this curvature tensor is the basic geometric object that measures how much space-time deviates from flatness. Let's try to understand that statement. Firstly, suppose space-time is flat. Then clearly the curvature is zero, because in flat space it is possible to choose the usual coordinates t, x, y, z. All gammas are zero in those coordinates, therefore derivatives of the gammas are zero, therefore the curvature is zero in that coordinate system. But the curvature is a tensor, so if it's zero in one coordinate system it's zero in every coordinate system. Okay. What about the converse? Suppose the curvature is zero everywhere. Then is it possible to choose coordinates in that space-time, at least locally in a finite patch, such that the metric is just eta mu nu? There could be identifications — there could be a circle; you can have spaces that are locally flat but not globally identical to flat space — but ignoring such subtleties, in a little finite patch, is it possible to choose coordinates such that the metric is just eta mu nu? Claim: yes. Let me give you the proof. It's a proof that can be given just in words, more or less without any equations. Okay. As we discussed, when the curvature is non-zero, it is not possible to take a vector or a one-form at a point and define a vector field or a one-form field by parallel transport, because what you get at another point depends on the path that you take.
However, when the curvature is zero, then parallel transport along different paths gives you the same answer — provided you don't run into some topological obstruction, so locally at least, in some local patch. Therefore the procedure of defining a vector field or a one-form field by parallel transport away from a point is well-defined. Okay. So now let me consider a manifold in which the curvature is zero. At a point, I go to coordinates such that the metric is eta mu nu at that point — we know we can always do that. So I've got a special point here, and I've moved to coordinates such that the metric is eta mu nu there. Okay. And in this coordinate system, at that point, I look at the one-form with components (1, 0, 0, 0): one in the time direction, zero in the space directions. I take this one-form at that point and define a one-form field by parallely transporting it all over space. Okay. This gives me a one-form field, which I'll call a mu of T as a function of x. The T is to remind me that I started with (1, 0, 0, 0), pointing in the time direction. In a similar manner, I define the one-form field a mu of x1 as a function of x by parallely transporting (0, 1, 0, 0), and likewise a mu of x2 and a mu of x3. So I've got these nice fields defined all over my manifold. Now I want to set up a coordinate system. Okay. I have my special point, which is going to be my origin. And now, given any point, I associate to it the coordinates T, x1, x2, x3 as follows: T is equal to the integral of a mu of T dx mu from the origin to that point. Similarly, x1 is equal to the integral of a mu of x1 dx mu from the origin to that point, and so on. Now, some of you are asking: but along what path? Answer: it doesn't matter, because the curvature vanishes. So this defines a coordinate system.
It's a coordinate system in which these one-form fields are constant: the expression for a mu of T is just (1, 0, 0, 0) everywhere. The reason is that (1, 0, 0, 0) dotted with dx mu is just dT — but our definition of T was that the change in T from one point to another is the integral of a mu of T dotted with dx mu. Putting those two statements together, it follows that a mu of T, as a field in this coordinate system, is (1, 0, 0, 0) everywhere. Okay. Similarly, a mu of x1 is (0, 1, 0, 0), and so on. Now, remember that these fields were defined by parallel transport. By definition, a field defined by parallel transport satisfies the condition that its covariant derivative is zero. You see, the covariant derivative is: take this guy at a nearby point, parallely transport it back, and compare. But in doing that we're just undoing the parallel transport that we did to define the field there in the first place, so we get zero. So these one-form fields that I've defined are, by construction, covariantly constant. But in this coordinate system their components are also literally constant, so their ordinary derivatives vanish. Okay. So let's work out what that tells us for the T field. We know by construction that the covariant derivative of a mu of T is zero. Now we work in the special coordinate system: the covariant derivative is the ordinary derivative of a mu of T — but that's zero — plus gamma mu alpha theta a theta, and a theta picks out just the time component. Since we know the whole thing is zero, that tells us the gamma itself is zero. But this argument works equally for a mu of T, a mu of x1, a mu of x2 and a mu of x3. That tells us that all gammas are zero. So we've gone to a coordinate system in which all gammas vanish identically. Okay.
And in this coordinate system it's also true — I probably didn't even need to do all this; it's probably even easier — that the metric is just eta mu nu. Sorry — Sagan? [Student: The gammas are zero, but if you remember, we got these gammas from the metric...] Okay, good question. It's a question of counting. You see, we got these gammas by inverting a relation. Let's do the counting. How many independent gammas are there? 10 into 4, that's 40: 10 because the lower pair is symmetric, times 4 for the upper index. How many independent derivatives of the metric are there? 4 into 10, 40 again: the 10 components of the metric, each with 4 derivatives. And if you remember how we found the expression for gamma, we first had expressions for derivatives of the metric written in terms of gammas, and then we inverted those equations. So if the gammas vanish, the derivatives of the metric also vanish. Okay. So there's just one step left to complete the argument. Now that we've shown that all gammas are zero, the metric is constant — that follows from the counting we just did. And if the metric is constant, we know what it is, because it was eta mu nu at one point; therefore it's eta mu nu everywhere else. There's probably a more direct way to argue that last statement. Anyway, we've shown it. Okay. So if we have a space-time in which the curvature vanishes, at least over an open neighborhood, then we can set up a coordinate system over that open neighborhood in which the metric is just eta mu nu. Okay. So that tells us that this curvature tensor measures the obstruction to a space-time being flat — to setting up a coordinate system in which the metric is eta mu nu. If the curvature vanishes, the space is flat; if the space is flat, the curvature vanishes. So, now... Okay.
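A concrete way to see the "curvature is the obstruction to flatness" statement: flat space written in polar coordinates has nonvanishing gammas, yet its curvature is identically zero, while a genuinely curved space like the sphere has curvature that no coordinate choice can remove. The following sympy sketch is my own illustration (the function names are mine, not from the lecture):

```python
import itertools
import sympy as sp

def christoffel(g, x):
    """Christoffel symbols Gamma^a_{bc} of the metric matrix g in coordinates x."""
    n = len(x)
    ginv = g.inv()
    return [[[sp.simplify(sum(ginv[a, s]*(sp.diff(g[s, b], x[c]) + sp.diff(g[s, c], x[b])
                                          - sp.diff(g[b, c], x[s])) for s in range(n))/2)
              for c in range(n)] for b in range(n)] for a in range(n)]

def riemann(g, x):
    """All components R^a_{bcd} as a dictionary keyed by (a, b, c, d)."""
    n = len(x)
    Gam = christoffel(g, x)
    R = {}
    for a, b, c, d in itertools.product(range(n), repeat=4):
        R[a, b, c, d] = sp.simplify(
            sp.diff(Gam[a][d][b], x[c]) - sp.diff(Gam[a][c][b], x[d])
            + sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b]
                  for e in range(n)))
    return R

r, th, ph = sp.symbols('r theta phi', positive=True)

# Flat plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2.
# The gammas are nonzero in these coordinates, yet every curvature component
# vanishes: curvature, not the gammas, is the coordinate-invariant obstruction.
g_polar = sp.Matrix([[1, 0], [0, r**2]])
Gam = christoffel(g_polar, [r, th])
assert Gam[0][1][1] != 0                                   # Gamma^r_{theta theta} = -r
assert all(v == 0 for v in riemann(g_polar, [r, th]).values())

# The unit sphere: genuinely curved, so some components survive.
g_sphere = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
assert any(v != 0 for v in riemann(g_sphere, [th, ph]).values())
```

This is exactly the logic of the argument above: in polar coordinates the gammas do not vanish, but since the curvature does, coordinates exist (here, Cartesian x, y) in which the metric is the flat one everywhere.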
We've gone slower than I had hoped. Okay. Now, a quick question for you: how useful do you find it for me to do the algebra in class? There's some temptation to just say "it can be shown that..." We don't want that, right? Okay. No, we don't want that. Fine. So, let's move on. What I'm heading towards is building Einstein's action, but we need a few more definitions and a few more facts about this curvature tensor. There are two additional things we want to understand about it. The first is the one that Arashan talked about, namely its Bianchi identity. And the second, which we'll actually talk about first, is something even more basic, even more elementary: the symmetry properties of the curvature tensor. Let's start talking about that immediately, though we're not going to get very far in this discussion. We've got three to five minutes left, and we're not going to finish the discussion of the symmetry of the curvature tensor in that much time, so I'm not going to do the proofs of the symmetry properties now. What I'm going to do instead is just tell you the properties, which we'll prove next time, and then use them to do some counting, which we can do quite simply. So, there are three properties of the curvature tensor that are important and interesting and don't involve derivatives of the curvature tensor. We have this tensor R^a_{bcd}. Now, it makes no sense to talk about a symmetry property between an upper and a lower index, because that's not coordinate-invariant: upper indices transform differently from lower indices. So if you say there's an interchange symmetry between an upper and a lower index, that can only be true in one special coordinate system.
But it does make sense to talk about symmetry properties of two lower indices. So the symmetry of this object is maximally revealed when we lower the upper index as well. Let's lower it and look at the object R_{abcd} — you know what this is in terms of the other one: lower the index with the metric. Now, the thing we're going to do to start our class off next time is to show that this object R_{abcd} is anti-symmetric under interchange of c and d, anti-symmetric under interchange of a and b, but symmetric under interchange of the pair (a, b) with the pair (c, d). So let's write that down as an equation: $R_{abcd} = -R_{bacd} = -R_{abdc} = R_{cdab}$. And then, of course, by playing around you can generate more identities. But the thing to remember is that it's anti-symmetric in its last two indices, anti-symmetric in its first two indices, and symmetric under interchange of the two pairs, okay? And there's one more thing that we're going to prove in the next class. If we freeze one of these indices — say the first index a — and cyclically permute the b, c, d indices, then $R_{abcd} + R_{acdb} + R_{adbc} = 0$. And this is true for arbitrary choices of the indices a, b, c, d. These things we will prove in the next class using some algebra; I don't want to start it now, because it's silly to interrupt algebra and have to start it again next class. But using these properties, I'm going to ask the following question: how many independent components are there in the curvature tensor? How many independent functions? So firstly, if you ask how many independent functions there were in the metric, the answer to that question was 10, right? 4 into 5 by 2. Okay. Now, let's try to do the counting. Can somebody help me?
First, ignore the cyclic relation. If I ignore that relation, what would the answer be? [Student: 36?] Not quite — unless you're in a different dimension. Let's talk it out loud. Okay, I'm going to break the question up. How many independent components are there in an anti-symmetric 4x4 matrix? Six: 4 into 3 by 2. So the pair (a, b) takes six independent values, and so does the pair (c, d) — that's why he said 36. But he hasn't accounted for the symmetry under interchange of the pairs. How do you account for that? We're counting the elements of a symmetric 6x6 matrix: 6 into 7 by 2, which is 21. Fantastic. So that would be 21 components. Okay? Now, what cut-down do we get from the cyclic relation? That's not so obvious, so let me help you work it out. The first thing is to understand when this relation gives you something new beyond what you already knew. It is true for all choices of indices. Suppose I choose b, c and d all to be the same index. Then I'm claiming there's no content in the relation. Why is that obvious? Because each term then has two equal indices sitting in an anti-symmetric pair, so each term is separately zero. Great. Now, the next thing I'm going to do is to choose two of b, c, d to be the same — say c and d are the same index. Let's see what I get. The first term, R_{abcc}, is obviously zero because c and d are the same and the last pair is anti-symmetric. The second term is R_{accb} and the third term is R_{acbc}; by anti-symmetry in the last two indices, the third is minus the second, so they cancel. So this identity, when any two of b, c and d are the same, is not a new identity. Okay? In order to potentially get something we haven't already accounted for in our counting, we have to choose b, c and d all distinct. Now, let's suppose we choose a equal to one of them — let's say b. Let's see what we get.
So we get R_{bbcd} — that's zero by anti-symmetry in the first pair — plus R_{bcdb} plus R_{bdbc}. Everyone, what can you say about this pair? It's zero: R_{bcdb} is equal, by pair exchange, to R_{dbbc}, which is minus R_{bdbc}. So this case is also automatic — we haven't learned anything new from the identity beyond the symmetry properties if the index a coincides with one of the other three. So the only time we learn something new is when all four indices are distinct. How many ways are there of choosing four distinct indices out of a set of four? One. So the number of independent components in the curvature tensor is 21 minus one, which is 20. In the problem set, which will actually appear soon, we'll give you a problem asking you to do this counting in D dimensions — how many independent components the curvature tensor has in D dimensions. Very good. Excellent. So the number of components in the curvature tensor is 20. Now, just to end this lecture, I'm going to write down Einstein's action — oh, but there are two or three more definitions first. We've now got this nice four-index object R_{abcd}. By the way, what order in derivatives is it? What is the highest derivative of g that appears in the curvature? Second order: gamma has one derivative, and there are terms which are derivatives of gamma. So it's a two-derivative object. That's ideal. In classical mechanics, we like actions with two-derivative terms. There's lots of experience behind that — the structure of phase space, two time derivatives, position and momentum — and it feeds nicely into quantum mechanics. So that's sort of ideal. So what we want to do is take something built out of R and put it into the action. You might worry that having second-derivative terms in the action is not what we want — that we want only first derivatives, appearing at most squared. But you see, the net total number of derivatives in R is never more than two.
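The counting we just did can be checked by brute force: impose the symmetry relations and the cyclic identity as linear constraints on a general four-index array, and compute the dimension of the solution space. A small numpy sketch — my own check, with my own function name:

```python
import itertools
import numpy as np

def riemann_dof(n):
    """Number of independent components of a four-index tensor in n dimensions
    subject to the Riemann-tensor symmetries:
      R_abcd = -R_bacd = -R_abdc = R_cdab,   R_abcd + R_acdb + R_adbc = 0."""
    idx = list(itertools.product(range(n), repeat=4))
    pos = {I: k for k, I in enumerate(idx)}
    rows = []
    for a, b, c, d in idx:
        for combo in (
            (((a, b, c, d), 1), ((b, a, c, d), 1)),            # antisym in first pair
            (((a, b, c, d), 1), ((a, b, d, c), 1)),            # antisym in last pair
            (((a, b, c, d), 1), ((c, d, a, b), -1)),           # pair exchange
            (((a, b, c, d), 1), ((a, c, d, b), 1),
             ((a, d, b, c), 1)),                               # cyclic identity
        ):
            row = np.zeros(len(idx))
            for I, sign in combo:
                row[pos[I]] += sign
            rows.append(row)
    # independent components = number of unknowns minus rank of the constraints
    return len(idx) - np.linalg.matrix_rank(np.array(rows))

# In 4 dimensions: 21 from the symmetric-6x6 counting, minus 1 cyclic identity = 20.
assert riemann_dof(4) == 20
# General formula n^2 (n^2 - 1) / 12 — useful for the problem set in D dimensions:
assert all(riemann_dof(n) == n**2 * (n**2 - 1) // 12 for n in (2, 3, 4))
```

The rank computation automatically discovers what we argued by hand: most instances of the cyclic identity are implied by the antisymmetries and pair exchange, and only the all-indices-distinct instances cut anything down.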
It's not as if you had two derivatives of the metric multiplying one derivative of the metric, which would give you a three-derivative term in the equation of motion. The equation of motion will obviously be a two-derivative equation of motion if you build the action out of R, as we will see in more detail — which is what we want. But of course, R is not a scalar. So what we want to do is build some scalar out of R that we can add to the action. How do we do that? Well, clearly, what we want to do is contract. Now, let's first see: can we get a two-index object by contracting R? Can you think of a way of getting a two-index object by contracting R? Are there many ways? What if I contract a and b? What do I get? Zero, because of the anti-symmetry. If I contract c and d, I also get zero. So what are my options? The mixed contractions, like the first index with the third — and which one I choose doesn't matter, because up to a minus sign they'll all be the same. So I make another definition: the Ricci tensor, $R_{ik} = g^{lm} R_{limk}$. In Landau–Lifshitz conventions you contract the first and the third index, and you think of the result as a function of the second and the fourth indices. Okay. Now, what symmetry property does this Ricci tensor have? It's symmetric. Because if I interchange i and k, I can relabel the dummy indices and use the pair-exchange symmetry of the Riemann tensor to get back the same thing. Okay. So this Ricci tensor is the unique two-index tensor you can get by contracting indices in the Riemann tensor, and it's a symmetric tensor in its indices. How many independent components in Ricci? There are no further identities for Ricci, so: ten — 4 into 5 by 2. Excellent. Now, out of this two-index object,
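To make the contraction concrete: on the unit 2-sphere, the Ricci tensor built by contracting the first and third indices comes out symmetric and in fact equal to the metric itself. This sympy sketch is my own illustration (the example metric is my choice, not from the lecture):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # unit 2-sphere
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, s]*(sp.diff(g[s, b], x[c]) + sp.diff(g[s, c], x[b])
                                     - sp.diff(g[b, c], x[s])) for s in range(n))/2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    return sp.diff(Gam[a][d][b], x[c]) - sp.diff(Gam[a][c][b], x[d]) \
        + sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b] for e in range(n))

# Ricci tensor: contract the first (upper) and third indices, R_ik = R^l_{ilk}
Ric = sp.Matrix(n, n, lambda i, k: sp.simplify(sum(riem(l, i, l, k) for l in range(n))))

assert Ric == Ric.T                                # Ricci is symmetric
assert sp.simplify(Ric - g) == sp.zeros(n, n)      # unit sphere: R_ik = g_ik
```

The symmetry assertion is exactly the dummy-relabeling plus pair-exchange argument from the lecture, checked on one example; that R_ik equals g_ik here is special to the unit sphere, not a general fact.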