Just to remind you, in the last class we were talking about covariant differentiation. We started discussing the notion of parallel transport of one-forms and vectors. In order for us to be able to differentiate things in a covariant manner, we need some object that tells us how to transport the one-forms and vectors out of which we build tensors. That object was called the Christoffel symbol, $\Gamma^\mu{}_{\alpha\beta}$, and the transport rule was $\delta A_\mu = \Gamma^\nu{}_{\mu\alpha}\, A_\nu\, \delta x^\alpha$. So this allowed us to transport the one-form $A_\mu$ from $x$ to $x + \delta x$. And then the view is that if this was the right rule for an index down, the correct rule for an index up was $\delta A^\mu = -\Gamma^\mu{}_{\nu\alpha}\, A^\nu\, \delta x^\alpha$. The first had a plus sign, the second a minus sign. And although this distinction is going to be completely unimportant in ten minutes, at the moment please remember that this $\Gamma$ has one index up and two indices down: the index on the right contracts with the spatial separation, while the index on the left is associated with the vector. For much of what we do in this course, the distinction between the two lower indices will not matter, because — as we will see — $\Gamma$ will turn out to be symmetric in them. We also talked about how to parallel transport objects with many indices up and many indices down, just by compounding this rule, index by index. And finally, we discussed how to define the covariant derivative of, let's say, a one-form: $D_\alpha A_\mu = \partial_\alpha A_\mu - \Gamma^\nu{}_{\mu\alpha}\, A_\nu$. The index having to do with how much you move in space is on the right; the index having to do with the vector is on the left. As I said, this distinction matters for logic at the moment, and it is actually important in general relativity when you have fermions.
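A quick way to see that those two signs are forced on us: with a plus for the lower index and a minus for the upper index, the scalar $A_\mu B^\mu$ is unchanged by parallel transport, whatever $\Gamma$ is. The following sketch (pure Python, with randomly generated numbers — a made-up $\Gamma$, not a real connection) checks this cancellation numerically:

```python
import random

# Check that the opposite signs in the two transport rules are exactly what is
# needed for the scalar contraction A_mu B^mu to be unchanged to first order.
# All numbers here are arbitrary: this tests the index structure, nothing more.
n = 4
random.seed(0)
rand = lambda: random.uniform(-1, 1)

# An arbitrary "connection" Gamma[mu][nu][alpha]: one index up (mu), two down.
Gamma = [[[rand() for _ in range(n)] for _ in range(n)] for _ in range(n)]
A = [rand() for _ in range(n)]    # one-form  A_mu
B = [rand() for _ in range(n)]    # vector    B^mu
dx = [rand() for _ in range(n)]   # displacement  delta x^alpha

# delta A_mu = +Gamma^nu_{mu alpha} A_nu dx^alpha   (lower index: plus sign)
dA = [sum(Gamma[nu][mu][a] * A[nu] * dx[a]
          for nu in range(n) for a in range(n)) for mu in range(n)]
# delta B^mu = -Gamma^mu_{nu alpha} B^nu dx^alpha   (upper index: minus sign)
dB = [-sum(Gamma[mu][nu][a] * B[nu] * dx[a]
           for nu in range(n) for a in range(n)) for mu in range(n)]

# First-order change of A_mu B^mu: the two terms cancel by index relabeling.
d_scalar = sum(dA[mu] * B[mu] + A[mu] * dB[mu] for mu in range(n))
print(abs(d_scalar) < 1e-12)
```

The cancellation is an exact index relabeling, so it holds for any $\Gamma$, independent of the symmetry and metric-compatibility assumptions introduced below.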
For most of what we do in this course, this distinction will be unimportant. Okay, right at the end of the last class, we then used this: we said that this covariant derivative has to transform covariantly under a coordinate change — that's why we introduced it. So we used this to deduce what the transformation properties must be for this $\Gamma$. And what we concluded was that under a coordinate change $x \to y$, $\tilde\Gamma^\alpha{}_{\beta\gamma} = \frac{\partial y^\alpha}{\partial x^{\zeta_3}} \frac{\partial x^{\zeta_1}}{\partial y^\beta} \frac{\partial x^{\zeta_2}}{\partial y^\gamma}\, \Gamma^{\zeta_3}{}_{\zeta_1\zeta_2} + \frac{\partial y^\alpha}{\partial x^\zeta}\, \frac{\partial^2 x^\zeta}{\partial y^\beta\, \partial y^\gamma}$. The first piece is how the transformation would have looked if $\Gamma$ were a tensor. And as usual, you must remember that this is a function of $x$ of $y$; I don't write that unless we need it, just to keep the formulas short. But in addition to the tensor piece, there is one more piece of the transformation rule: the term $\frac{\partial y^\alpha}{\partial x^\zeta}\, \frac{\partial^2 x^\zeta}{\partial y^\beta\, \partial y^\gamma}$. Think of it as an inhomogeneity: this part of the shift in $\Gamma$ is not proportional to $\Gamma$ itself, whereas the first part is just a rotation of the original $\Gamma$, as for a tensor. So we've got this object, $\Gamma$, and it must obey this transformation rule. Now, all this so far sounds like abstract mathematics. You know, it's mathematicians who say, let us introduce an object $\Gamma$ into our theory that has this transformation property. That's not the way physics is supposed to work, and for good reason: you should be very uncomfortable with all of this. This discomfort should leave you only once you have an explicit formula for $\Gamma$ in terms of the fields of the problem — in terms of the metric. That's what we're shooting at.
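To make the inhomogeneous term concrete, here is a sketch under a simple assumption: flat two-dimensional space, where $\Gamma = 0$ in Cartesian coordinates, so that after the change to polar coordinates the entire connection is the inhomogeneous term $\frac{\partial y^\alpha}{\partial x^\zeta}\frac{\partial^2 x^\zeta}{\partial y^\beta \partial y^\gamma}$. Evaluating that term by finite differences reproduces the familiar polar values $\Gamma^r{}_{\phi\phi} = -r$ and $\Gamma^\phi{}_{r\phi} = 1/r$:

```python
import math

def cart(y):
    # x^zeta(y): Cartesian coordinates from polar y = (r, phi)
    r, phi = y
    return [r * math.cos(phi), r * math.sin(phi)]

def d2(zeta, y, b, g, h=1e-4):
    # d^2 x^zeta / dy^b dy^g by central differences (works for b == g too)
    pts = []
    for sb, sg in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        yy = list(y)
        yy[b] += sb * h
        yy[g] += sg * h
        pts.append(cart(yy)[zeta])
    return (pts[0] - pts[1] - pts[2] + pts[3]) / (4 * h * h)

def dy_dx(y):
    # Jacobian dy^alpha/dx^zeta for polar coordinates (analytic)
    r, phi = y
    return [[math.cos(phi), math.sin(phi)],
            [-math.sin(phi) / r, math.cos(phi) / r]]

y0 = (1.3, 0.7)   # an arbitrary sample point (illustrative choice)
J = dy_dx(y0)

def Gamma(alpha, b, g):
    # Gamma in polar coordinates = purely the inhomogeneous term (flat space)
    return sum(J[alpha][z] * d2(z, y0, b, g) for z in range(2))

r0 = y0[0]
print(abs(Gamma(0, 1, 1) + r0) < 1e-5)       # Gamma^r_{phi phi} = -r
print(abs(Gamma(1, 0, 1) - 1 / r0) < 1e-5)   # Gamma^phi_{r phi} = 1/r
```

Note also that the inhomogeneous term is manifestly symmetric in $\beta, \gamma$, since partial derivatives commute — the point used in the next paragraph.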
Or an explicit statement that there is a new degree of freedom in the theory apart from the metric. That would be a possibility, but it would be less economical — and part of what makes a theory beautiful is that it's economical. Okay. So now let's look at this transformation formula for $\Gamma$. The first thing you notice is that there's a tensor part, and you know what that is. But the inhomogeneous part is manifestly symmetric in the bottom two indices of $\Gamma$. What does that tell you? It tells you that you can take this $\Gamma$ and split it up into an antisymmetric piece and a symmetric piece — and who can stop you from doing that? Then the antisymmetric part transforms like a tensor, because the inhomogeneous term drops out of the antisymmetric combination. So you can think of $\Gamma$ as composed of a part that is antisymmetric in its bottom two indices and a part that is symmetric in its bottom two indices, and the antisymmetric part transforms like a tensor. As we will see as we go along in our study, this symbol $\Gamma$ will enter the local equations of motion of many things. And we want our theory to be such that — going back to the motivations we had at the beginning of this course — it is possible to choose a coordinate system in which everything looks like flat space in any local equation of motion. So we want our theory to be such that it's possible to choose a coordinate system that transforms $\Gamma$ to zero at a point. But if you've got a tensor that's non-zero in any one coordinate system, it's non-zero in every coordinate system, because the transformation of a tensor is linear and homogeneous: you can't take something non-zero and make it zero by a linear homogeneous transformation.
That tells you that the equivalence principle as we originally stated it — wanting the local equations of motion to be like those in flat space — cannot really be true unless either no local equation depends on the antisymmetric part of $\Gamma$, in which case we don't need to care about it, or, for some reason or other, the antisymmetric part of $\Gamma$ vanishes. Okay. Now, the antisymmetric part of $\Gamma$ has a name: it's called the torsion. In some of the literature it is, very strangely, called the curvature of space — an unfortunate piece of terminology, because we're going to use that word for something else; it's almost as if different traditions each chose their own vocabulary. Anyway, most people call this the torsion: the antisymmetric part of $\Gamma$. And we're going to proceed with the assumption that the torsion is zero. I should say that when you start doing physics more completely — say you introduce fermions and related things — we will find that this assumption does not hold. In the presence of fermions, though we may not get to that in this course, the spin connection of space develops a torsion, one that is determined by what the fermions are doing. So, though we will not see it in this course, this assumption is not always correct. But we're going to make it, and we won't encounter a problem with this assumption in most of what we're going to do in this course. Okay. How is this justified? Okay, let's say it this way. What we want to do is to introduce an object that makes derivatives transform covariantly. Now, as we've seen, what was the problem?
The problem was the inhomogeneous term in the transformation of the ordinary derivative. And as we've seen, the only part of $\Gamma$ that does any work in helping you cancel this inhomogeneous term is the symmetric part. Now, any assumption you make on $\Gamma$ that is so strong that $\Gamma$ can no longer make the covariant derivative covariant would be a stupid assumption. But suppose $\Gamma$ were symmetric in its bottom two indices: it would still be possible to make an object — a covariant derivative — that transforms covariantly, because a symmetric $\Gamma$ can satisfy the transformation rule; the symmetry doesn't spoil anything. We could try to create a theory in which there was also an antisymmetric part of $\Gamma$, and sometimes you are forced to do that. But the minimal thing is this: since we can set it to zero, and nothing forces us to keep it non-zero, we set it to zero. And notice this is a coordinate-invariant statement, because the antisymmetric part of $\Gamma$ transforms like a tensor: if it vanishes in one coordinate system, it vanishes in all. Please note the full transformation rule is linear but not homogeneous; for the antisymmetric part it is linear and homogeneous. That is why the split survives coordinate changes: the transformation takes a symmetric $\Gamma$ to a symmetric $\Gamma$. Let me say this again. This is the kind of thing you would most naturally do if you were trying to build the theory, because what you want is to generalize the notion of a derivative in the smallest way you can, so as to be consistent with covariance. And it's consistent to make the generalization with $\Gamma$ symmetric. So let's forget about the antisymmetric part, unless something forces us to return to it. Is this right? Well, with $\Gamma$ we are defining the way parallel transport happens.
But we don't have an independent notion of how parallel transport happens. We are going to define how parallel transport happens. It's not that there is a physically correct way, a prior notion of parallel transport, for which we're trying to find a formula — we're trying to make a definition. A student asks: physically, what law are we excluding when we put this to zero? Physically, how do we picture this? You mean parallel transport in a visual sense? Okay, look. What is the natural choice of what parallel transport is? We're trying to do better than "move the vector a little bit without changing its orientation" — because what does "without changing its orientation" mean on a curved space? One thing you might say is that dot products taken with the metric should be preserved. There may be many criteria you have in your mind, and anything reasonable that you have will be related to what we do. But I'm saying that, purely as a mathematical matter, a connection endows a manifold with a notion of parallel transport; a prior idea has nothing to do with it. What we are going to do now is to find a reasonably natural notion of parallel transport, and then we will find the formula for $\Gamma$ from first principles. And we're going to try to do this in a minimalistic way. Yes — that's right: as you say, once we turn on fermions, we will need something of this kind; for fermions and their Lagrangians you need to work with vierbeins and spin connections, beyond the connection we use for bosons. Let me just say it again. What we are going to do is to invent a notion of parallel transport that makes the covariant derivative transform covariantly. That's all.
For that, what we need is to endow our spacetime with some object that has the transformation rule above; that's what we want. For this purpose, the antisymmetric part of $\Gamma$ is irrelevant. If I choose a $\Gamma$ which is completely symmetric, you can add something antisymmetric to it — something that transforms like a tensor — and that will equally well be a good notion of parallel transport. But we don't need the antisymmetric term to cancel any inhomogeneous piece in the transformation. So the minimal assumption is that this antisymmetric part is zero. I want to emphasize again that there isn't a prior notion of parallel transport; we're trying to define it. So we're going to go with this assumption and see what we can now say about $\Gamma$. Is this clear? Any questions about that? We're going to require one more property of our notion of parallel transport. It is that it is what is called metric compatible: that the covariant derivative of the metric tensor is zero, $D_\alpha g_{\mu\nu} = 0$. Now, why do we require this? You could motivate this in many ways, but one simple motivation is this. Look, suppose we've got a metric. Then a vector $A^\mu$ can be transformed into a one-form $A_\mu$ by contraction with the metric. Given the metric, there is a very natural correspondence between a one-form and a vector, and you might want this correspondence to be respected by differentiation. What do I mean? Take the covariant derivative of $A_\mu$ — so now this is a two-tensor, $D_\alpha A_\mu$ — and then raise the first index with $g^{\mu\zeta}$. This gives you some $B^\zeta{}_\alpha$, a tensor with one upper index and one lower index. A student asks why this is required — suppose we simply put it in as a demand. Okay. But compare this object with the covariant derivative acting on $g^{\mu\zeta} A_\mu = A^\zeta$, which is already completely defined. This is a tensor.
This is a vector. And something I leave as an exercise for you to convince yourself of is that our rules for covariant differentiation obey the Leibniz rule: the derivative of $ab$ is the derivative of $a$ times $b$ plus $a$ times the derivative of $b$. Okay? So, using that fact, let's compare the two objects. On one side I have the covariant derivative of $A_\mu$ with the index raised afterwards, $g^{\mu\zeta}\, D_\alpha A_\mu$, with free indices $\zeta$ and $\alpha$. On the other side I have the covariant derivative acting on the product: $g^{\mu\zeta}\, D_\alpha \left( g_{\mu\theta}\, A^\theta \right)$, where $D$ acts on the whole combination in the bracket. Now, we want to check whether or not this is equal to that. There are two terms here by the Leibniz rule: the first term is where $D$ acts on the $A$, and the second is where it acts on the $g$. The term where $D$ acts on the $A$ is exactly what we want. Why is that? Because we get a $g^{\mu\zeta}$ multiplying the $g_{\mu\theta}$, which gives us a factor of $\delta^\zeta_\theta$. That just converts the $\theta$ index to $\zeta$ — and that's what we want. The additional term is something we don't want, and that additional term is proportional to the covariant derivative of the metric. So this property — that it doesn't matter whether you raise indices before or after taking the covariant derivative — holds precisely when the covariant derivative of the metric vanishes. If you make this demand — and it seems a reasonably natural demand to make of your notion of parallel transport, that it be compatible with the metric — then you require that all components of the covariant derivative of the metric are equal to zero. Okay, excellent. So now we know two things about our notion of parallel transport, about this $\Gamma$. Firstly, that it's symmetric.
And secondly, that it's compatible with the metric, so that the covariant derivative of the metric is zero. Now, this is enough to completely determine our notion of parallel transport. You see, given a metric, an object with an upper index is the same as an object with a lower index: you can take an object with an upper index and convert it to an object with a lower index just by pulling down with a factor of $g$. Now, what you want is that this equivalence between upper and lower indices commutes with differentiation. So, whether you take the derivative of the upper-index object, or take the derivative of the lower-index object and then raise the index, you get the same answer. This is the condition that ties the metric to your notion of parallel transport. Mathematically, these are separate things: you could have one notion of parallel transport, another notion of the metric, and you're not forced to link them. But clearly something good will happen if these two things are linked, so it's a natural requirement to me. Okay, $\Gamma$ is now completely determined: these two assumptions — the symmetry of $\Gamma$ and the compatibility of parallel transport with the metric — are completely sufficient to determine $\Gamma$ in terms of the metric. Okay, so let me write down the following three equations. Firstly, I want to write down the equation $D_L\, g_{IK} = 0$. I'm using capital letters because on the board my small $l$ and small $1$ are indistinguishable. This is a true equation. And then I'll write down the other two: $D_K\, g_{IL} = 0$ and $D_I\, g_{LK} = 0$. All of these are true equations. I'll take these equations and write them out. Okay? This first equation here is $\partial_L$ acting on the metric — let me write it out.
Because we're not writing two different coordinate systems now — it's just one — I'll use plain $\partial$. Writing out the covariant derivative and moving the $\Gamma$ terms across, which flips their minus signs to plus: $\partial_L\, g_{IK} = \Gamma^M{}_{IL}\, g_{MK} + \Gamma^M{}_{KL}\, g_{IM}$. A student asks: would the principle of equivalence somehow be violated if this were not true? Good question. Let me say what we're actually going to do. What we're actually going to do is write down some sort of Lagrangian to describe motion and derive the equations of motion from that Lagrangian. The real justification for what I'm saying lies there: the equations of motion will turn out to involve a $\Gamma$ built out of $g$ in the way that I'm describing. So now your question would be: is there some way of writing down a Lagrangian whose equations of motion would involve some other $\Gamma$? In the absence of additional structure, probably not. Basically, because the extra piece in the most general $\Gamma$ — the torsion — would be a tensor with three indices, antisymmetric in two of them, and as we will see when we completely investigate the set of all tensors that you can build out of just the metric, there are no such tensors built from the metric alone. So you would need some other structure — some other matter fields or something like that — with which you could naturally associate this antisymmetric part of $\Gamma$. This is what happens with fermions, but not in the simplest cases. Yeah. Okay. Another question: do we require this only for special metrics, say those obtained by transforming flat space, or for every metric? We require this to hold always, on an arbitrary spacetime manifold: we want the correspondence between upper and lower indices to commute with differentiation everywhere. Okay? So that's why we're writing this as an identity.
This formula here must be true for an arbitrary metric. Now I write the same formula with the indices exchanged — here what we've done is interchange $K$ and $L$: $\partial_K\, g_{IL} = \Gamma^M{}_{IK}\, g_{ML} + \Gamma^M{}_{LK}\, g_{IM}$. Okay? And finally, I'll write down the third formula, but with a minus sign, because we're going to want to add all three with these signs — watch me. From the first one, we interchange $I$ and $L$: $-\partial_I\, g_{LK} = -\Gamma^M{}_{LI}\, g_{MK} - \Gamma^M{}_{KI}\, g_{LM}$. Okay? These are all true equations. Now I take these equations and add them all together. So on the left I get $\partial_L\, g_{IK} + \partial_K\, g_{IL} - \partial_I\, g_{LK}$. What do I get on the right? Let's see. I'm going to tell you what we get, and then we'll check it. What we should get is $2\,\Gamma^M{}_{LK}\, g_{IM}$. Let's see if this works. First, circle the terms whose lower $\Gamma$ indices are $L, K$: this one, $\Gamma^M{}_{KL}\, g_{IM}$, and this one, $\Gamma^M{}_{LK}\, g_{IM}$. These two are, of course, identical by the symmetry of $\Gamma$, and they add up to give us the answer. What about the terms with $I, L$ at the bottom? The term with $I, L$ is, by symmetry, the same as the term with $L, I$ — but they occur with opposite signs, so this cancels that. What about the terms with $I, K$? Similarly, by symmetry, this is the same as that, and this cancels that. So we've shown the equation is true. This is great, because we've now got a formula with only a single $\Gamma$ on the right-hand side, contracted with $g$. So we can solve this formula for $\Gamma$.
It turns out that $\Gamma^N{}_{LK} = \tfrac{1}{2}\, g^{NM} \left( \partial_L\, g_{KM} + \partial_K\, g_{LM} - \partial_M\, g_{LK} \right)$ — the whole bracket raised with $g^{NM}$. This is the solution for the $\Gamma$s as functions of $g$. Let's step back for a moment and see why we could solve for the $\Gamma$s as functions of the $g$s; let's do the counting that tells us it had to be the case. How many $\Gamma$s do we have? Just count the number of independent components of $\Gamma$ in four dimensions. Without the assumption of symmetry, how many? Exactly: four into four into four — that's 64. With the assumption of symmetry, how many? The class guesses: 32? 34? 24? Come on, somebody do this counting. The number of symmetric pairs from four lower indices is four into five by two — that's 10. The index at the top has nothing to do with those at the bottom; that's a factor of four more. So four into ten: 40 independent components of $\Gamma$. How many independent first derivatives of the metric? 40 — it's the same counting: you've got one derivative index and then two symmetric indices, so four into ten. So you have 40 first derivatives of the metric, and 40 independent components of $\Gamma$. And the fact that the covariant derivative of $g$ is zero gives you linear relations between these 40 objects that are first derivatives of the metric and the 40 objects that are the $\Gamma$s. So clearly you can solve for one in terms of the other. But without the assumption of symmetry, how many independent components of $\Gamma$? There would have been 64 independent components of $\Gamma$, still only 40 first derivatives of the metric, and we could still have asserted metric compatibility. What would we have done?
We could have solved for 40 of the 64 independent $\Gamma$s in terms of first derivatives of the metric, but the remaining parts would have been undetermined. Which parts would have been undetermined? Precisely the antisymmetric parts of the $\Gamma$s. How many antisymmetric components are there? 24. That's right, because 64 minus 40 is 24 — and you can count it independently: the number of antisymmetric combinations of four indices is four into three by two, that's 6, and 6 into 4 is 24. So precisely the antisymmetric parts of the $\Gamma$s would have been left undetermined by this condition, and the assumption that the antisymmetric part is zero is what allowed us to completely determine the $\Gamma$s. You see what we have basically done: we made the minimal assumption — the minimal assumption consistent with having a notion of covariance — and that gives us enough power to completely determine our $\Gamma$s in terms of the metric. So now $\Gamma$ is not an abstract object; it's a completely concrete object. You know the metric, you know the $\Gamma$s. So I know what my notion of parallel transport means; I've determined it by putting physically reasonable criteria on this notion of parallel transport. As I say, later on in the course it will turn out that this notion of parallel transport is the one that enters the equations of physics. We will write out Lagrangians for our matter and check this — up to some potential circularity, which we'll see as we go. A student, Prathu, asks: can we go the other way — choose the $\Gamma$s and then find the metric, by solving for the first derivatives of the metric? Good — that's a very good question. So the question was: suppose you specify a set of $\Gamma$s, and think of these equations as differential equations for the metric. Can we solve them? Now firstly, suppose I give you an arbitrary set of $\Gamma$s.
Can you always solve for a metric whose first derivatives are given by these $\Gamma$s? No, you cannot. Why not? You get consistency conditions — those are the things you would have to impose. What you would do is say: I've got my $\Gamma$s; treat these as differential equations and try to solve for the metric. If you are given arbitrary $\Gamma$s, you can't do this. Why not? It's much more basic. Let me take a simpler example. Suppose I want to determine a function $a$ by specifying its derivative. So I give you a field $v_\theta$ and I tell you $\partial_\theta a = v_\theta$. But if I specify an arbitrary $v_\theta$, can I always find an $a$ such that $\partial_\theta a = v_\theta$? It's more than just a question of constants: there's an integrability condition. You see, this is four differential equations to determine one function. That's too much; in general, there's no solution. Take the first of these equations and integrate it — you get an answer. Take the second of these equations and integrate it — you get an answer. How do you know these answers are consistent? The integrability condition is that the curl of $v$ vanishes: $\partial_\phi v_\theta - \partial_\theta v_\phi = 0$, which is obvious from the left-hand side when $v$ is a gradient, but is not true for arbitrary $v_\theta$. So if you just try to specify a set of $\Gamma$s, it will be similar: you can take second derivatives both ways, and you'll get a large number of integrability conditions on the $\Gamma$s from demanding consistency. Just another way of saying it: specifying all the $\Gamma$s is like specifying all 40 first derivatives of the metric. That's 40 differential equations to determine 10 functions — it's too much, so there are consistency constraints. But of course, if those consistency conditions are obeyed, then you would be able to solve for the metric. Okay, excellent.
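The obstruction in this toy example can be illustrated numerically. In the sketch below (the function choices are arbitrary illustrations, not anything from the lecture), a field built as a gradient passes the curl test, while a simple rotation field fails it, so no potential $a$ exists for it:

```python
import math

h = 1e-5

def curl(v, p):
    # d v_y / dx - d v_x / dy at point p = (x, y), by central differences
    x, y = p
    dvy_dx = (v((x + h, y))[1] - v((x - h, y))[1]) / (2 * h)
    dvx_dy = (v((x, y + h))[0] - v((x, y - h))[0]) / (2 * h)
    return dvy_dx - dvx_dy

# A gradient field: v = grad(a) for the potential a = sin(x) * y^2.
grad_a = lambda p: (math.cos(p[0]) * p[1] ** 2, 2 * math.sin(p[0]) * p[1])
# A field that is NOT a gradient: a rigid rotation, curl = 2 everywhere.
not_grad = lambda p: (-p[1], p[0])

p0 = (0.4, 1.1)   # arbitrary test point
print(abs(curl(grad_a, p0)) < 1e-6)      # gradients are integrable
print(abs(curl(not_grad, p0) - 2) < 1e-6)  # curl is 2: no potential exists
```

The same pattern, with more indices, is what produces the integrability conditions on an arbitrary set of $\Gamma$s.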
Other questions about this? A student asks whether the inversion is always possible. We get a system of linear equations, yes. That's what we've done here: we wrote the original equations, which were expressions for derivatives of the metric in terms of $\Gamma$s, and we explicitly inverted them — unless something is singular, which it isn't here. Another question: the statement $D_\alpha g_{\mu\nu} = 0$ — is it somehow related to local flatness itself? You know, if you were just a mathematician, you could define a notion of parallel transport independent of the metric, and with an arbitrary such notion you would not be able to simultaneously set the metric to $\eta$ and $\Gamma$ to zero at a point, as we will want to. But my answer is: not directly connected — I don't immediately see that it's the same thing. The student persists: if you have a notion of parallel transport for which some coordinate system makes $\Gamma$ zero, then in that coordinate system $\partial_\alpha g_{\mu\nu} = D_\alpha g_{\mu\nu} = 0$ — if you assume these equations are true. Yes — if. This equation is precisely the assertion of the connection between the two. We've got two different notions in our heads: one notion is that of parallel transport, the other is that of the metric. Mathematically, they're distinct. You could mathematically consider a manifold with one particular notion of parallel transport and two different metrics; this is something mathematicians do all the time — it's not a totally weird thing. Linking them is simply the most natural, simplest thing to do, and good things will follow from it, as you will see. Okay, let's get on.
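As a spot check on the explicit inversion discussed above, the sketch below (assuming the flat plane in polar coordinates, $g = \mathrm{diag}(1, r^2)$, with finite-difference derivatives) evaluates $\Gamma^N{}_{LK} = \tfrac12\, g^{NM}(\partial_L g_{KM} + \partial_K g_{LM} - \partial_M g_{LK})$, checks it against the known polar components, and then confirms metric compatibility $D_\alpha g_{\mu\nu} = 0$ numerically:

```python
import math

def g(y):
    r, phi = y
    return [[1.0, 0.0], [0.0, r * r]]       # flat plane in polar coordinates

def g_inv(y):
    r, phi = y
    return [[1.0, 0.0], [0.0, 1.0 / (r * r)]]

def dg(y, l, h=1e-5):
    # d g_{ij} / d y^l by central differences
    yp, ym = list(y), list(y)
    yp[l] += h
    ym[l] -= h
    gp, gm = g(yp), g(ym)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(y):
    # G[n][l][k] = Gamma^n_{lk} = (1/2) g^{nm} (d_l g_{km} + d_k g_{lm} - d_m g_{lk})
    d = [dg(y, l) for l in range(2)]        # d[l][i][j] = d_l g_{ij}
    gi = g_inv(y)
    return [[[0.5 * sum(gi[n][m] * (d[l][k][m] + d[k][l][m] - d[m][l][k])
                        for m in range(2))
              for k in range(2)] for l in range(2)] for n in range(2)]

y0 = (1.3, 0.7)   # arbitrary sample point
G = christoffel(y0)
r0 = y0[0]
print(abs(G[0][1][1] + r0) < 1e-6)        # Gamma^r_{phi phi} = -r
print(abs(G[1][0][1] - 1 / r0) < 1e-6)    # Gamma^phi_{r phi} = 1/r

# Metric compatibility: D_a g_{ij} = d_a g_{ij} - G^m_{ia} g_{mj} - G^m_{ja} g_{im}
g0 = g(y0)
d = [dg(y0, a) for a in range(2)]
err = max(abs(d[a][i][j]
              - sum(G[m][i][a] * g0[m][j] + G[m][j][a] * g0[i][m]
                    for m in range(2)))
          for a in range(2) for i in range(2) for j in range(2))
print(err < 1e-6)                          # all components of D g vanish
```

The same few lines, with the metric function swapped out, work for any metric given in explicit coordinates.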
As you will see, the action that we started with — the action for the motion of a particle — will turn out naturally to involve this notion of parallel transport; the natural actions of physics do. Yes: at least until we start doing the dynamics of gravity itself, this is how it goes. A student asks: we have this nice notion of parallel transport; and separately, without going into the derivation, we can write down the Riemann tensor and its relatives. Those formulas for the Riemann tensor use the $\Gamma$s, and there's no formula for the Riemann tensor except the one given in terms of $\Gamma$s. So if you use this formula for the $\Gamma$s, you're assuming this connection between the metric and parallel transport — the standard formula for the Riemann tensor makes this assumption, because it's given in terms of these $\Gamma$s. So if there were two different metrics with the same parallel transport properties, can we independently derive the Riemann tensor and the properties of the spacetime, knowing in one case the parallel transport properties and in the other case the $g_{\mu\nu}$? So you're asking: what is the connection between the notion of curvature and the notion of parallel transport? Is that correct? Okay. Now, as we'll see, with this definition curvature and parallel transport are actually connected. Curvature is linked to the following question: how does, let's say, a vector change when you parallel transport it around a small loop? The answer to that question is given in terms of curvature. Suppose you had an independent notion of parallel transport, but you used the usual formulas for curvature given in terms of the metric connection, the ones in the textbooks.
Then this connection between how a vector changes around a small loop and curvature would be broken. It's when parallel transport is defined through these formulas that the statement — how something changes under parallel transport around a small loop is given by the curvature — is maintained. Okay. So there is a connection between curvature and this notion of parallel transport, coming from this metric connection; but everything good relating parallel transport to curvature happens only with this notion of parallel transport. For those of you who don't yet know curvature: you'll see, as we go along. Okay. There are many exercises involving these $\Gamma$s and the covariant derivatives built out of them that I meant to do in class, but because of the seminar, and because of all your excellent questions, we got slower than I had hoped. So I'm going to give these to you as exercises; this will be our first problem set. Okay, let me just list the exercises I would like you to do. First, show that $\Gamma^I{}_{IK} = \tfrac{1}{2}\, g^{IM}\, \partial_K\, g_{IM} = \partial_K \ln \sqrt{-g}$. This first step is basically obvious from the definition. This is the result for contracting one upper and one lower index. Now I want the result of contracting the two lower indices. To contract two lower indices, you can only contract them through the metric, so show that $g^{KL}\, \Gamma^I{}_{KL} = -\tfrac{1}{\sqrt{-g}}\, \partial_K \left( \sqrt{-g}\, g^{IK} \right)$. Next, the divergence of a vector: show that $D_I A^I = \tfrac{1}{\sqrt{-g}}\, \partial_I \left( \sqrt{-g}\, A^I \right)$ — it's given by a simple form.
Demonstrate also that if you have an antisymmetric two-tensor, $A^{ij} = -A^{ji}$ (a tensor with two upper indices, antisymmetric), then its covariant divergence is
$$\nabla_i A^{ij} = \frac{1}{\sqrt{-g}}\,\partial_i\!\left(\sqrt{-g}\,A^{ij}\right),$$
and check that this is true no matter how many antisymmetric indices there are: instead of $A^{ij}$ you can put $A^{i j_1 j_2 j_3 \ldots}$, such that all those indices are mutually antisymmetric, and this formula always holds. And the next exercise is the divergence of a symmetric tensor. So, given that $A^{ij} = A^{ji}$, demonstrate that
$$\nabla_i A^i{}_k = \frac{1}{\sqrt{-g}}\,\partial_i\!\left(\sqrt{-g}\,A^i{}_k\right) - \tfrac{1}{2}\left(\partial_k g_{lm}\right) A^{lm}.$$
The last thing, as a corollary to this statement (we will need it): show that the scalar Laplacian is given by
$$\nabla^m \nabla_m \phi = \frac{1}{\sqrt{-g}}\,\partial_m\!\left(\sqrt{-g}\; g^{mn}\,\partial_n \phi\right).$$
Yes? [A question about whether the connection could have an antisymmetric part, a torsion; couldn't a gravitational field put a torque on a system?] This is a good one. You may be right that this would set something rotating, so that you wouldn't be able to go to a frame in which the thing doesn't rotate. I would have to check; this may be right. But suppose we were just talking about the motion of a particle. This antisymmetric term, the torsion, would basically be forced to vanish, because there is nothing else available: when we wrote down the action for the motion of a particle, the action was written in terms of the metric. There was no gamma in it.
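Coming back to the exercise list for a moment: the first two identities can be verified with a computer algebra system. Here is a sketch using sympy, with the flat plane in polar coordinates as a toy example of my own (Euclidean signature, so $\sqrt{g} = r$ stands in for $\sqrt{-g}$; the metric and names are not from the lecture):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
# Flat plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2
g = sp.Matrix([[1, 0], [0, r**2]])
ginv, detg = g.inv(), g.det()

def christoffel(i, k, l):
    # Gamma^i_{kl} = (1/2) g^{im} (d_k g_{ml} + d_l g_{mk} - d_m g_{kl})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[i, m] *
        (sp.diff(g[m, l], coords[k]) + sp.diff(g[m, k], coords[l])
         - sp.diff(g[k, l], coords[m]))
        for m in range(2)))

# Trace identity from the exercises: Gamma^k_{ki} = d_i ln sqrt(g)
for i in range(2):
    traced = sum(christoffel(k, k, i) for k in range(2))
    assert sp.simplify(traced - sp.diff(sp.log(sp.sqrt(detg)), coords[i])) == 0
```

The only nonzero symbols here are $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = 1/r$, and the trace over the upper index indeed gives $\partial_r \ln r = 1/r$.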
So any equation of motion that we get will be written in terms of the metric. Okay, so now you can ask: is there any three-index object, antisymmetric in two of its indices, that you can form out of the metric and that transforms like a tensor? That is a completely unambiguous mathematical question, because we know how the metric transforms. The answer to that mathematical question is no. So just from knowing the answer to this mathematical question, we know that the motion of a particle governed by this square-root-of-minus-$g$ action that we started with could only involve this notion of parallel transport. Okay. I think you're probably right that in situations where torsion is turned on, it would involve local rotations that cannot be transformed away. Questions or comments about these exercises? They are extremely simple algebraic exercises; in doing them you get familiar with manipulating these Christoffel symbols. And do all of them, because these are all results people use as we go along. Okay, so one of these exercises is just to say: suppose you are doing a practical calculation and you want to compute the divergence of a vector field. You might think, well, the first thing to do is to determine all 40 Christoffel symbols and then use the formula somebody has given me to compute the divergence of the vector field. That's a laborious process. It's laborious in practice, but it's also laborious conceptually, because then in your head the divergence of a vector field becomes some complicated thing. However, that laborious algebraic process reduces to something much simpler. It's just this object.
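The shortcut just described, ordinary derivatives with a factor of $\sqrt{-g}$ inside and $1/\sqrt{-g}$ outside, can be checked against the laborious Christoffel route symbolically. A sketch with sympy, again on my toy polar-coordinate plane (Euclidean signature, so $\sqrt{g}$ replaces $\sqrt{-g}$):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])       # flat plane in polar coordinates
ginv, sq = g.inv(), sp.sqrt(g.det())     # sqrt(g) = r

def Gamma(i, k, l):
    return sum(sp.Rational(1, 2) * ginv[i, m] *
               (sp.diff(g[m, l], coords[k]) + sp.diff(g[m, k], coords[l])
                - sp.diff(g[k, l], coords[m])) for m in range(2))

V = [sp.Function('V0')(r, th), sp.Function('V1')(r, th)]  # arbitrary vector field

# The laborious route: div V = d_i V^i + Gamma^i_{ik} V^k
laborious = sum(sp.diff(V[i], coords[i]) for i in range(2)) + \
            sum(Gamma(i, i, k) * V[k] for i in range(2) for k in range(2))

# The shortcut: div V = (1/sqrt(g)) d_i ( sqrt(g) V^i )
shortcut = sum(sp.diff(sq * V[i], coords[i]) for i in range(2)) / sq

assert sp.simplify(laborious - shortcut) == 0
```

For the radial field $V = (r, 0)$, for instance, the shortcut gives $\frac{1}{r}\partial_r(r \cdot r) = 2$, the familiar planar divergence.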
You do what you would do in flat space, just take ordinary derivatives of the components, but put a factor of $\sqrt{-g}$ inside the derivative and divide by a factor of $\sqrt{-g}$ outside. Okay, and this simplicity applies not only to vector fields but to antisymmetric tensors of all ranks; that's what the exercise says. Now, how important was antisymmetry? Suppose we try to do the same thing for a symmetric two-index tensor. Do you get the same answer? Well, you get a piece that's the same, this part, but you also get something a little more complicated, which explicitly involves the first derivatives of the metric contracted with the tensor. Okay. These kinds of results will be important when we study Maxwell's equations, because Maxwell's equations are full of antisymmetric tensors. And the symmetric-tensor formula will of course be important when we study the gravitational equations themselves, because we will need to take the covariant divergence of the stress tensor. Okay, one last thing I wanted you to check; sorry, I had forgotten this sixth exercise. It is that
$$\nabla_\mu A_\nu - \nabla_\nu A_\mu = \partial_\mu A_\nu - \partial_\nu A_\mu.$$
That is, when we take covariant derivatives of a one-form and antisymmetrize, it is the same as antisymmetrizing ordinary derivatives. And check that this is true for an arbitrary number of downstairs indices, provided all the indices are mutually antisymmetric. This is the last exercise. It means that something like the field strength of Maxwell's electrodynamics, $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, continues to be given by the same formula in curved space. That is automatic: you don't need the metric to define the field strength. But you will need the metric to get the full set of Maxwell's equations. Okay, fine. Is anyone pressed for time? Can I take five or ten minutes extra? Okay, fine.
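This sixth exercise can also be checked mechanically: since $\Gamma$ is symmetric in its two lower indices, the gamma terms cancel in the antisymmetrization. A sympy sketch on the same toy polar metric (my example, not the lecture's), with an arbitrary one-form:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()

def Gamma(i, k, l):
    return sum(sp.Rational(1, 2) * ginv[i, m] *
               (sp.diff(g[m, l], coords[k]) + sp.diff(g[m, k], coords[l])
                - sp.diff(g[k, l], coords[m])) for m in range(2))

A = [sp.Function('A0')(r, th), sp.Function('A1')(r, th)]  # arbitrary one-form

def cov_d(mu, nu):
    # Covariant derivative of a one-form: D_mu A_nu = d_mu A_nu - Gamma^s_{mu nu} A_s
    return sp.diff(A[nu], coords[mu]) - sum(Gamma(s, mu, nu) * A[s] for s in range(2))

# Antisymmetrized covariant derivative equals the ordinary curl, component by component
for mu in range(2):
    for nu in range(2):
        lhs = cov_d(mu, nu) - cov_d(nu, mu)
        rhs = sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])
        assert sp.simplify(lhs - rhs) == 0
```

The cancellation visibly hinges on $\Gamma^s_{\mu\nu} = \Gamma^s_{\nu\mu}$; with torsion present, the statement would fail.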
What I wanted to do, before deriving for you the equation of motion of a particle moving around in a metric, was to justify algebraically a statement that I have often made: that it is possible to find a coordinate frame in which the metric becomes $\eta$, the standard Minkowski metric, and in which the first derivatives of the metric vanish, at any given point in spacetime. That you can set the metric to be $\eta$ at a point is obvious. Why is it obvious? What is the rule for the transformation of a metric? The rule is that you act on the metric by a matrix on one side and by that matrix's transpose on the other. Let me write that:
$$\tilde g_{\alpha\beta} = \frac{\partial x^\theta}{\partial y^\alpha}\; g_{\theta\phi}\; \frac{\partial x^\phi}{\partial y^\beta}.$$
Now, if you think of $g$ as a matrix and the Jacobian $\partial x/\partial y$ as another matrix $J$, this becomes the matrix equation $\tilde g = J^T g\, J$, where $g$ is a symmetric real matrix. All of you are familiar with the theorem that a symmetric real matrix can be diagonalized by an orthogonal similarity transformation. What we are doing with the $J$'s is not a similarity transformation; it is $J^T g\, J$, not $J^{-1} g\, J$. But when the similarity transformation is orthogonal, the two are the same thing. There are more elementary ways to justify this, but quoting the theorem is enough. So it is clear that at a point we can choose $J$ to bring $g$ to $\eta$. This part is obvious. Let's take this point to be the origin. The kind of coordinate change that takes $g$ to $\eta$ at the origin is a linear one: what we did was something like $x^\mu = A^\mu{}_\nu\, y^\nu$, because we want a constant matrix to do the diagonalization; $A$ need not depend on $y$.
We wanted to know what happens at the origin. The linear coordinate change we have done sets the metric to $\eta$ there, but it leaves the first derivatives of the metric arbitrary. Now I want to demonstrate that it is simultaneously possible to make the metric $\eta$ and to set the first derivatives of the metric to zero at the origin. What I am actually going to demonstrate is something equivalent: I am going to set all the gammas to zero. Because the gammas are linear in the first derivatives of the metric, setting the gammas to zero is the same as setting those first derivatives to zero. Now, suppose I have first chosen coordinates so that the metric at the point is $\eta$. What further coordinate change can I do that will not spoil the fact that the metric is $\eta$ at that point, but will set all the gammas to zero? That's easy. First, we want to make sure that we don't spoil the metric being $\eta$ at the point. That will be guaranteed if we make the coordinate change higher order in $y$ than linear; in particular, suppose we make it quadratic in $y$. Then the derivative matrix $\partial x/\partial y$, evaluated at the origin, receives no contribution from the additional quadratic terms. So I am going to look for a coordinate change of the form
$$x^\alpha = y^\alpha + A^\alpha{}_{\beta\theta}\, y^\beta y^\theta,$$
with $A^\alpha{}_{\beta\theta}$ constant and symmetric in $\beta\theta$. Any such coordinate change will not affect the value of any tensor at the origin, because the tensor transformation involves taking one derivative of this map and then setting $y$ to zero, at which point the quadratic piece doesn't contribute. But gamma does not transform like a tensor. In particular, the transformation property for gamma involves two derivatives of $x$ with respect to $y$. So we can try to choose the $A$'s to set gamma to zero. So suppose we had some gamma; how will it transform under this kind of coordinate change?
So the transformed gamma will be equal to the tensor-transformation part, plus the inhomogeneous piece we wrote down:
$$\tilde\Gamma^\alpha_{\beta\theta} = (\text{tensor part}) + \frac{\partial y^\alpha}{\partial x^m}\,\frac{\partial^2 x^m}{\partial y^\beta\,\partial y^\theta}.$$
Let's evaluate everything at the origin. At the origin, all the Jacobian factors are just Kronecker deltas, so the tensor part gives back $\Gamma^\alpha_{\beta\theta}$, and we can replace the whole thing by
$$\tilde\Gamma^\alpha_{\beta\theta} = \Gamma^\alpha_{\beta\theta} + \frac{\partial^2 x^\alpha}{\partial y^\beta\,\partial y^\theta}.$$
Now plug in what this object is for our coordinate change: the second derivative of $x^\alpha$ is $2 A^\alpha{}_{\beta\theta}$, so
$$\tilde\Gamma^\alpha_{\beta\theta} = \Gamma^\alpha_{\beta\theta} + 2\,A^\alpha{}_{\beta\theta}.$$
Very simple. Just choose $A^\alpha{}_{\beta\theta} = -\tfrac{1}{2}\,\Gamma^\alpha_{\beta\theta}$, evaluated at the origin. So the coordinate change
$$x^\alpha = y^\alpha - \tfrac{1}{2}\,\Gamma^\alpha_{\beta\theta}\, y^\beta y^\theta$$
sets gamma to zero. It has not affected the value of any tensor field at the origin, so it does not affect the fact that $g$ was equal to $\eta$. So we have demonstrated that we can find a coordinate change that sets the metric to the standard form $\eta$ and all first derivatives of the metric to zero, at any given point. Actually, there is a stronger result, which I will not try to prove now unless we need it: along any open curve, you can always set gamma to zero. So you can do more than just at a point; you can do it along any open curve. But at the moment I will not prove this, because we don't need it. Okay, that's it for the mathematics. The last thing we are going to do in today's lecture is to see how all these abstract mathematical constructions enter physics, in the simple Lagrangian that we started the lectures with.
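The quadratic coordinate change just constructed can be verified symbolically. Here is a sketch with sympy, using a two-dimensional toy metric of my own choosing (Euclidean signature, equal to the identity at the origin but with nonvanishing first derivatives there): the map $x^\alpha = y^\alpha - \tfrac{1}{2}\Gamma^\alpha_{\beta\theta}(0)\,y^\beta y^\theta$ leaves the metric untouched at the origin and kills all its first derivatives.

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
y0, y1 = sp.symbols('y0 y1')
X, Y = [x0, x1], [y0, y1]

# Toy metric: identity at the origin, nonzero first derivatives there
g = sp.Matrix([[sp.exp(x1), 0], [0, sp.exp(x0)]])
ginv = g.inv()

def Gamma(i, k, l):
    return sum(sp.Rational(1, 2) * ginv[i, m] *
               (sp.diff(g[m, l], X[k]) + sp.diff(g[m, k], X[l])
                - sp.diff(g[k, l], X[m])) for m in range(2))

# Christoffel symbols evaluated at the origin
G0 = [[[Gamma(i, k, l).subs({x0: 0, x1: 0}) for l in range(2)]
       for k in range(2)] for i in range(2)]

# The lecture's coordinate change: x^a = y^a - (1/2) Gamma^a_{bc}(0) y^b y^c
xs = [Y[a] - sp.Rational(1, 2) * sum(G0[a][b][c] * Y[b] * Y[c]
      for b in range(2) for c in range(2)) for a in range(2)]
J = sp.Matrix(2, 2, lambda a, b: sp.diff(xs[a], Y[b]))   # dx/dy
gy = J.T * g.subs({x0: xs[0], x1: xs[1]}) * J            # transformed metric

# At the origin: metric unchanged, all first derivatives vanish
assert gy.subs({y0: 0, y1: 0}).applyfunc(sp.simplify) == sp.eye(2)
for c in Y:
    assert sp.diff(gy, c).subs({y0: 0, y1: 0}).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

This is exactly the locally inertial (normal) coordinate construction from the lecture, just carried out explicitly on one example metric.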
Namely, the Lagrangian that dictates the motion of a free particle under the influence of gravitation. Go back to the formula we first started the lectures with: we've got an action of the form
$$S = -m \int d\lambda\; \sqrt{-\,g_{\alpha\beta}(x)\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda}}.$$
Our task is to derive the equation of motion that follows from this action. The equation of motion is completely determined by the action; you don't need any notion of parallel transport or anything like that. It will give us some equation of motion; let's see what it is, and then try to interpret it in the language we've been developing. Okay, good. So let's take this action, vary it with respect to $x(\lambda)$, and use the principle of least action to find the equations of motion. That's the variation of $S$: $\delta S$ is $-m$ times the integral of a factor of one half, times one over the square root, times the variation of whatever was inside the square root. Now, the variation of what was inside the square root is
$$\delta\!\left(-g_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda}\right) = -\,\delta g_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda} \;-\; 2\,g_{\alpha\beta}\,\frac{d\,\delta x^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda}.$$
The factor of 2 is there because the variation with respect to each of the two factors of $dx/d\lambda$ clearly gives the same thing. What is this $\delta g_{\alpha\beta}$? It is the change in $g$ because you've changed $x$ a bit, so I write it in terms of $\delta x$:
$$\delta g_{\alpha\beta} = \frac{\partial g_{\alpha\beta}}{\partial x^\theta}\,\delta x^\theta,$$
where I've chosen a dummy index $\theta$ that doesn't clash with the indices already there. So that is the variation of the action. Now we want to set this variation to zero.
But you remember from Lagrangian mechanics: when the variation appears differentiated, before equating its coefficient to zero you transfer all the derivatives off the variation by integrating by parts. So let's write that:
$$\delta S = -m \int d\lambda\,\left[\frac{d}{d\lambda}\!\left(\frac{1}{\sqrt{\,\cdot\,}}\; g_{\theta\beta}\,\frac{dx^\beta}{d\lambda}\right) - \frac{1}{2\sqrt{\,\cdot\,}}\,\frac{\partial g_{\alpha\beta}}{\partial x^\theta}\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda}\right]\delta x^\theta,$$
where $\sqrt{\,\cdot\,}$ is shorthand for the same square root, $\sqrt{-g_{\mu\nu}\,\dot x^\mu \dot x^\nu}$, that appears in the action; I had forgotten the $1/\sqrt{\,\cdot\,}$ in the second term, but it is there as well. As usual, by the principle of least action, the only way this can vanish for arbitrary $\delta x^\theta$ is for the bracket to vanish. So we have derived the equation of motion
$$\frac{d}{d\lambda}\!\left(\frac{1}{\sqrt{\,\cdot\,}}\; g_{\theta\beta}\,\frac{dx^\beta}{d\lambda}\right) - \frac{1}{2\sqrt{\,\cdot\,}}\,\frac{\partial g_{\alpha\beta}}{\partial x^\theta}\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda} = 0.$$
Let me write it in the following way. Define a new variable $s$, or maybe I should call it $\tau$, by
$$d\tau = \sqrt{-\,g_{\alpha\beta}\,\dot x^\alpha\,\dot x^\beta}\; d\lambda;$$
this is the proper time along the path. You see, the point is that this $\lambda$ was an arbitrary parameter, and I want an equation involving only physical things. So instead of writing an equation with $d/d\lambda$, I will always write my equation with $d/d\tau$. What I need is that every $d/d\lambda$ comes with a factor of the square root next to it. Now you see, we already have one such factor here.
So if I just divide this whole equation by one more factor of the square root, I can combine that factor with the outer $d/d\lambda$ to make a $d/d\tau$, and the factors already present combine with each of the inner $d/d\lambda$'s. So, in terms of this new variable $\tau$, the proper time along the physical motion, the equation becomes
$$\frac{d}{d\tau}\!\left(g_{\theta\beta}\,\frac{dx^\beta}{d\tau}\right) - \frac{1}{2}\,\frac{\partial g_{\alpha\beta}}{\partial x^\theta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$
That's not an evil-looking equation. The question you might immediately ask is: is this covariant? Do we get the same answer in different coordinate systems? Now, we've just spent the last lecture developing a theory of covariant differentiation; if this is covariant, it should be expressible in our language. So let's try. What I want to claim is that this equation is the same as the vanishing of the covariant derivative of the velocity along the path:
$$\frac{d^2 x^\theta}{d\tau^2} + \Gamma^\theta_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$
Let me check that these are the same. Hold on, sorry, I missed something. What I missed was the metric: the equation we derived has a lower $\theta$ as its free index, while the claimed equation has an upper $\theta$ as its free index; the thing relating them is $g_{\theta\mu}$. So apart from differentiating the velocity, we also need to differentiate the $g$ sitting inside the $d/d\tau$. Sorry about this; let me back up and redo this.
The equation is correct so far, but let me redo the next step more carefully. Divide by the square root, as we discussed, to convert every $d/d\lambda$ into a $d/d\tau$. Now, in this $d/d\tau$ there is a term where I differentiate the $g$; using the chain rule,
$$\frac{d}{d\tau}\!\left(g_{\theta\beta}\,\frac{dx^\beta}{d\tau}\right) = g_{\theta\beta}\,\frac{d^2x^\beta}{d\tau^2} + \frac{\partial g_{\theta\beta}}{\partial x^\alpha}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau}.$$
Adding the term $-\tfrac{1}{2}\,\partial_\theta g_{\alpha\beta}\,\dot x^\alpha \dot x^\beta$ that was already there, the equation of motion becomes
$$g_{\theta\beta}\,\frac{d^2x^\beta}{d\tau^2} + \frac{1}{2}\left(\partial_\alpha g_{\theta\beta} + \partial_\beta g_{\theta\alpha} - \partial_\theta g_{\alpha\beta}\right)\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$
Why is this the same? The last term in the parentheses is obviously the term we already had, with $\theta$ the free index. And the first two terms give the same thing as each other, because they are equal under a flip of the dummy indices $\alpha$ and $\beta$; together they reproduce precisely the chain-rule term. Now contract with $g^{\mu\theta}$: the combination in parentheses is exactly the Christoffel symbol. So what we have computed is that our equation takes the form
$$\frac{d}{d\tau}\,\frac{dx^\mu}{d\tau} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$
This is precisely the rule for covariant differentiation of a vector: you take the derivative and add the gamma term with a plus sign. What is this telling you?
The other way of saying this is that if you define $\frac{D}{d\tau} \equiv \frac{dx^\alpha}{d\tau}\,\nabla_\alpha$, that is, the covariant derivative contracted with the tangent vector along your world line, then this equation is the same statement written completely explicitly: the equation of motion for a particle is
$$\frac{dx^\alpha}{d\tau}\,\nabla_\alpha\,\frac{dx^\beta}{d\tau} = 0.$$
Now, this is a coordinate-covariant object: since the differentiation here is coordinate covariant, what this gives you is one covariant set of equations, and covariance in any other coordinate system follows from it. What I want you to note is that this gamma, with exactly this formula, appeared directly in our computation, independent of all the other mathematics we did. This is the correct equation of motion; and if you define a covariant derivative with our formula, it is exactly things like this that come out. Okay, questions or comments before we close the lecture? Yes, this question: is this equation also true in an arbitrary parametrization of the world line? Well, you just substitute what $\tau$ was in terms of $\lambda$; we had the equation in an arbitrary parametrization. Writing it out, it was
$$\frac{d}{d\lambda}\!\left(\frac{1}{\sqrt{-g_{\mu\nu}\,\dot x^\mu\,\dot x^\nu}}\; g_{\theta\beta}\,\frac{dx^\beta}{d\lambda}\right) - \frac{1}{2\sqrt{-g_{\mu\nu}\,\dot x^\mu\,\dot x^\nu}}\,\frac{\partial g_{\alpha\beta}}{\partial x^\theta}\,\frac{dx^\alpha}{d\lambda}\,\frac{dx^\beta}{d\lambda} = 0,$$
with the square-root factors left in place. Okay, let's stop here.
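As a closing sanity check (my own, not from the lecture), the geodesic equation $\frac{d^2x^\alpha}{d\tau^2} + \Gamma^\alpha_{\beta\gamma}\frac{dx^\beta}{d\tau}\frac{dx^\gamma}{d\tau} = 0$ can be integrated numerically. A sketch on the unit 2-sphere, where the Christoffel symbols are known in closed form and geodesics are great circles; the RK4 integrator and initial data are toy choices:

```python
import math

# Geodesics on the unit 2-sphere, coordinates (theta, phi),
# metric ds^2 = dtheta^2 + sin^2(theta) dphi^2.
# Nonzero symbols: Gamma^th_{ph ph} = -sin(th) cos(th),
#                  Gamma^ph_{th ph} = Gamma^ph_{ph th} = cot(th).

def accel(th, dth, dph):
    # Right-hand side of x'' = -Gamma x' x'
    a_th = math.sin(th) * math.cos(th) * dph * dph
    a_ph = -2.0 * (math.cos(th) / math.sin(th)) * dth * dph
    return a_th, a_ph

def rk4_step(state, h):
    def f(s):
        th, ph, dth, dph = s
        a_th, a_ph = accel(th, dth, dph)
        return (dth, dph, a_th, a_ph)
    k1 = f(state)
    k2 = f(tuple(s + 0.5*h*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5*h*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h*(a + 2*b + 2*c + d)/6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start on the equator moving purely in phi: the geodesic is a great circle,
# so theta stays at pi/2 and phi grows linearly with proper time.
state = (math.pi/2, 0.0, 0.0, 1.0)   # (theta, phi, dtheta/dtau, dphi/dtau)
h, steps = 0.01, 100
for _ in range(steps):
    state = rk4_step(state, h)

assert abs(state[0] - math.pi/2) < 1e-9   # still on the equator
assert abs(state[1] - 1.0) < 1e-9         # phi advanced by tau = 1
```

The conserved speed $g_{\alpha\beta}\dot x^\alpha \dot x^\beta = \dot\theta^2 + \sin^2\theta\,\dot\phi^2$ stays at its initial value along the integration, which is the numerical counterpart of $\frac{D}{d\tau}\frac{dx^\alpha}{d\tau} = 0$ preserving the norm of the tangent vector.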