Questions about that? OK, so let's move on. I gave you a little assignment to complete this exercise. People tried it? Yeah? Excellent. So today I want to move on to the basic formalism we will study in this course for a while: the structure that defines non-abelian gauge theory. OK, so let's start. You remember that when we were studying the abelian gauge theory, the U(1) gauge theory, Maxwell's theory, the theory that we studied in the last class, we allowed this theory to interact with a scalar field, such that under a gauge transformation the scalar transforms as φ' = e^{iλ(x)} φ, and the gauge field as A_μ → A_μ + (1/e) ∂_μ λ. And then the action |D_μ φ|^2, plus the kinetic term of the gauge field, F_{μν} F^{μν}, was gauge invariant, where D_μ = ∂_μ - i e A_μ. Now, you remember also that this way of producing an action of a gauge field interacting with other things could only work if we had a matter action which, in the absence of the gauge field, was invariant under a global U(1), a phase-rotation invariance. Now, given this, and given the fact, as you know, that this structure is successfully implemented in nature in the theory of electromagnetism (by the way, φ could have been replaced by a fermionic field; nothing essential in what we do would change), there is a natural urge for a theorist, some of whom have a weakness for generalizations of structures, to ask the following question. Suppose we had instead a matter action that was invariant under some other symmetry group, some symmetry group that was not U(1). Can we play the same game for that kind of matter action?
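[Editor's aside, not part of the lecture: the U(1) transformation rules recalled above can be checked symbolically. A minimal sketch in sympy, in one spacetime dimension, with my own symbol names (`lam`, `A`), verifying that D_μφ transforms covariantly, which is exactly what makes |D_μφ|^2 gauge invariant.]

```python
import sympy as sp

# one coordinate is enough to see the mechanism
x = sp.symbols('x', real=True)
e = sp.symbols('e', real=True, nonzero=True)
lam = sp.Function('lam', real=True)(x)   # gauge parameter lambda(x)
phi = sp.Function('phi')(x)              # complex scalar field
A = sp.Function('A', real=True)(x)       # gauge field component

# covariant derivative D = d/dx - i e A
D = lambda f, gauge: sp.diff(f, x) - sp.I * e * gauge * f

# gauge transformation: phi -> e^{i lam} phi,  A -> A + (1/e) d lam
phi_t = sp.exp(sp.I * lam) * phi
A_t = A + sp.diff(lam, x) / e

# D phi transforms covariantly: D'(phi') = e^{i lam} D(phi)
diff_expr = sp.expand(D(phi_t, A_t) - sp.exp(sp.I * lam) * D(phi, A))
assert diff_expr == 0
```

The inhomogeneous shift of A_μ exactly cancels the derivative of the local phase; that cancellation is the whole point of the construction.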
Can we try to devise some sort of interaction between a massless gauge field and this matter action, an interaction that knows about the symmetry group of that action? This is a natural question to ask. It may at first seem rather unmotivated; it's a question that historically came almost out of nowhere. When Yang asked this question, for instance, and gave a talk about it at the Institute for Advanced Study, his motivation was that perhaps it was going to describe nucleon-nucleon interactions. And he almost had to stop the talk, because Pauli kept insisting that it made no sense, since the theory just had a massless gauge field. So the actual motivation to perform this study was not really phenomenological; the phenomenology was forced. The real motivation is a motivation that often drives theoretical work: you see a structure, and you ask, is there a natural generalization of this structure? You work out its consequences, and only after working out its consequences do you look back to see whether it might have a reasonable physical application. Physicists function in both ways: one can start from phenomenology, or one can start from structure. The most beautiful moments of physics happen when these two come together. OK. So anyway, we're going to ask this question. We're going to look at some matter action that is invariant under some symmetry group that is not U(1). So let's step back for a minute and think about symmetries. What is a symmetry? What is a symmetry of an action? This is for some action S[φ]. A symmetry of the action is some transformation, φ̃ = F(φ), that leaves the form of the action unchanged. What does that mean? What it means is that if I compute S[φ̃], treating φ̃ as the new field variable,
the result is equal to S[φ]. You can actually plug in this change of variables: for most actions and most symmetry transformations, what you get as a function of φ̃ will be an entirely different functional from the one you started with. When you've got some functional of fields, and some transformation rule with this property, that the functional form is preserved, we call it a symmetry of the action. We will come back to the study of symmetries and their consequences very soon, later in the lectures. But at the moment, what we want to do is specialize to a particular kind of symmetry, a kind that is sometimes called a linearly-realized symmetry: one where the fields φ_a transform by some matrix, φ̃_a = M_a{}^b φ_b. This is a specialization and not the most general case, but it is sufficient to motivate the construction we're interested in. Any kind of transformation can be a symmetry, but in today's class, and for what we're going to do today, we're interested in what are called continuous symmetry groups. These are symmetry transformation rules labeled by a continuous parameter, with the property that at some value of the parameter the transformation does nothing. An example would be rotations: rotating by some angle about some axis direction. You've got continuous parameters, namely where the axis points and how much you rotate by. And if you rotate by angle equal to zero, you're doing nothing. Of course, we'll be looking at transformations acting in field space, not in real space; that was just an example to give you the idea of what a continuous group is. So suppose I've got some class of matrices, parametrized by some number of continuous parameters, such that at some value of these parameters, let's call that value of the parameters 0, the matrix becomes the identity matrix.
And in the neighborhood of those parameter values, let me denote this matrix M by e^{iT}. In the neighborhood of the identity, T is a small matrix, because T is 0 when M is the identity. And I want to know what I can say about these matrices T. So the first thing to say is that in the neighborhood of the identity, M = 1 + iT. That is the infinitesimal version of the symmetry transformation. The identity part does nothing, and T acts on your field φ infinitesimally. So φ̃ = φ + iTφ in the neighborhood of the identity, and δφ, the change in φ under the infinitesimal symmetry transformation, is iTφ. OK. I'm interested in knowing what we can say about these matrices T, what we can say about the collection of these matrices. So what can we say? Well, one thing we can say is the following. Look, suppose I act with an infinitesimal symmetry transformation e^{iT_1}, and then I act with another infinitesimal symmetry transformation e^{iT_2}. What is the result? Well, you can use the Baker-Campbell-Hausdorff formula. You would be able to write the product, if you wanted to, as e^{i(T_1 + T_2)}, an infinitesimal symmetry transformation, times something else. And at leading order, that something else is built out of the commutator [T_1, T_2]. Now, we know that the product of two symmetry transformations gives us a symmetry transformation. So this is an infinitesimal symmetry transformation compounded by something, and that something must by itself be one of these symmetry transformations. So [T_1, T_2] must itself generate an infinitesimal symmetry. OK. Let us denote a basis for the set of all infinitesimal symmetry generators by T_A.
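[Editor's aside: the second-order Baker-Campbell-Hausdorff statement used here can be checked numerically. A sketch in numpy with two randomly chosen Hermitian "generators" (my own construction, not from the lecture); it shows that including the commutator term improves on the naive e^{A+B} by orders of magnitude.]

```python
import numpy as np

def mexp(X, terms=30):
    # matrix exponential by Taylor series (adequate for small X)
    out = np.eye(X.shape[0], dtype=complex)
    term = out.copy()
    for n in range(1, terms):
        term = term @ X / n
        out = out + term
    return out

rng = np.random.default_rng(0)
H1, H2 = rng.standard_normal((2, 3, 3))
T1, T2 = H1 + H1.T, H2 + H2.T          # two Hermitian "generators"
eps = 1e-4
A, B = 1j * eps * T1, 1j * eps * T2

lhs = mexp(A) @ mexp(B)
naive = mexp(A + B)                          # misses the commutator piece
bch2 = mexp(A + B + 0.5 * (A @ B - B @ A))   # BCH through second order

# the commutator term is exactly what is needed at second order
assert np.linalg.norm(lhs - bch2) < 1e-9
assert np.linalg.norm(lhs - bch2) < 1e-2 * np.linalg.norm(lhs - naive)
```

The leftover error is third order in the generators, consistent with the claim that the commutator is the full story at quadratic order.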
This index A labels which generator we're talking about. So, on the set of all infinitesimal symmetry generators, we can write [T_A, T_B] = i f_{AB}{}^C T_C. Now, a word on notation: Weinberg uses C for these structure constants; most people use f. Since we've been using multiple textbooks, it's a bit hard to keep the notation straight; let me call it f. Is this clear? OK. Excellent. [Student:] Sir, at what point did we use that these two are symmetry transformations? [Lecturer:] You see, this factor e^{iT_1} gives you a symmetry transformation, and this factor e^{iT_2} gives you a symmetry transformation, and the whole thing is just acting with one and then acting with the other. Look at our definition of a symmetry: it's clear that if you act once and then act again, you get a symmetry. And if I take the inverse of e^{i(T_1+T_2)} times this product, that's also a symmetry transformation. Is this clear? [Student:] But the commutator is order T squared; it's the leading correction. How does that constrain anything? [Lecturer:] The fact that this product is a symmetry transformation is an exact statement. What I've done is take that exact statement and expand it in a power series in T. The term T_1 + T_2 in the exponent is the linear term; the commutator is the leading term at quadratic order. The whole thing, at all orders in T, must be a symmetry transformation; we're processing that information at order T squared. [Student:] But then the commutator, a quadratic-order product, is giving me something linear in the generators. Are we putting in a constraint on f? [Lecturer:] No. You see, suppose I put in a small parameter, T → εT. Then the linear term comes with an ε and the commutator with an ε squared. These are two separate terms in the series here.
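[Editor's aside: the relation [T_A, T_B] = i f_{AB}{}^C T_C can be made concrete with the smallest non-abelian example. A sketch in numpy, assuming the su(2) generators T_a = σ_a/2 built from the Pauli matrices; it extracts the structure constants and checks that they come out as the antisymmetric ε symbol.]

```python
import numpy as np

# Pauli matrices; T_a = sigma_a / 2 generate su(2)
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)
T = s / 2

def comm(X, Y):
    return X @ Y - Y @ X

# extract f_{ab}^c from [T_a, T_b] = i f_{ab}^c T_c,
# using the orthogonality tr(T_c T_d) = delta_{cd} / 2
f = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        for c in range(3):
            f[a, b, c] = (2 * np.trace(comm(T[a], T[b]) @ T[c]) / 1j).real

# expected: the totally antisymmetric epsilon symbol
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
assert np.allclose(f, eps)
```

So for su(2) the structure constants are f_{ab}{}^c = ε_{abc}, the familiar angular-momentum algebra.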
They come multiplied by coefficients that are order one; the smallness goes entirely into the ε. [Student question about what is held fixed.] That's a good question. We're going to imagine some basis in the space of what are called generators. The set of all infinitesimal transformations, the things that you put in the exponent, forms a vector space, and we pick a basis for it. To move near the identity, we make the coefficients of those basis elements small; we do not make the basis matrices themselves small. The T_A's are some fixed basis; they don't get scaled. OK, good question. OK. Now, there are many ways of writing the Baker-Campbell-Hausdorff formula. One way: you could write the product as e^{i(T_1 + T_2)} times something else, and the leading term of that something else involves the commutator. There are several distinct versions of Baker-Campbell-Hausdorff, of which this is one. If you're not comfortable with this, you can just work it out: expand both sides and match them consistently; you'll find, to second order, this formula with some definite number in front of the commutator. Is this clear? OK. So what we conclude is that the space of infinitesimal symmetry transformations, these T's, firstly forms a vector space, because at leading order the product is generated by T_1 + T_2, which therefore has to be a generator too. Secondly, we've also concluded that there is a bracket on the space of these generators. This bracket is sometimes called the Lie bracket. It's a bilinear form on the space of generators: a map that takes two elements of the space of generators to another generator. [Student:] Is there any significance to the up and down placement of the indices?
[Lecturer:] At the moment, I'm making statements in an arbitrary basis in the space of generators, and I want the statements to survive when I change the basis. Suppose I perform a change of basis by a similarity transformation: a lower index transforms one way, and an upper index transforms with the inverse. I'm always keeping consistent track of the transformation properties under a change of basis; that's what the index placement records. Is this clear? Okay, excellent. Now, as we've explained, the symmetry transformations we're dealing with are matrices acting on fields, some finite-dimensional matrices. Now, it's an obvious property of finite-dimensional matrices that [X, [Y, Z]], cyclically summed over X, Y and Z, is zero. Why is this true? Let's check it; it's just true by matrix multiplication. What is this quantity? It is X(YZ - ZY) - (YZ - ZY)X. Write this as XYZ - XZY - YZX + ZYX. Now take the cyclic sum. The term XYZ is cancelled against the cyclic image of -YZX: cycle the X around to the front and -YZX becomes -XYZ. Similarly, -XZY is cancelled against the cyclic image of +ZYX. So everything cancels in the cyclic sum. This is just a trivial identity for matrices; it's true for any operator product that is associative, and in particular it's true for matrices. It has nothing really very deep to it, but it has a fancy name: it's called the Jacobi identity. [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0. It's true; this is what I just did. Okay? So, this is obvious; anyone can derive it. However, for the theory of symmetries it's very important, so let's try to work out its consequences. It's true for any three matrices, for anything that can be multiplied associatively.
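[Editor's aside: the cancellation just described is easy to confirm numerically. A sketch in numpy with three arbitrary random matrices, since associativity is all the identity needs.]

```python
import numpy as np

rng = np.random.default_rng(1)

def comm(A, B):
    return A @ B - B @ A

# three arbitrary matrices: the Jacobi identity needs nothing but associativity
X, Y, Z = rng.standard_normal((3, 4, 4))
jacobi = comm(X, comm(Y, Z)) + comm(Y, comm(Z, X)) + comm(Z, comm(X, Y))
assert np.allclose(jacobi, 0)
```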
It's true more generally: all I've used here is associativity of the product, so it's true for any associative product. This is a very, very elementary statement about products. The reason it's not so familiar to us is that when we grow up in school, we learn about products that are commutative. For a commutative product this is of course true, but stupidly true, because every term is separately zero; why would anyone need it? It's not familiar simply because we didn't grow up with non-commutative products as children. It's about as simple a fact as ab = ba is for numbers. But you see, even very simple facts sometimes become very deep tools. Okay? This is one of those. Before we explore that, let me introduce the notion of a representation. We've got this abstract statement: [T_A, T_B] = i f_{AB}{}^C T_C. Any collection of matrices, or any collection of operators on any Hilbert space, that obeys this identity is called a representation of the algebra. We will usually be interested in finite-dimensional representations, at least for what I'm talking about now. A finite-dimensional representation of this algebra is just some set of finite-dimensional matrices obeying these commutation relations. Very good. Now here is something nice: the very notions that go into making up the Lie algebra already supply you with one representation of the algebra. You see, suppose we've got an infinitesimal generator, and we act on it with a symmetry transformation. Does that give a representation of the algebra in the sense that we have just defined? Not if we act in the naive way, because the kind of representations we're interested in are linear actions, of the form δφ = (matrix) φ, and simple multiplication from the left does not have that form, as we'll now see.
So let me set this up properly; let me erase what I was starting to say and do it carefully. Suppose I look at the following object: I've got an infinitesimal group element, and I act on it from the left by another group element, and from the right by its inverse, g e^{iT} g^{-1}. What does this generate? Let's work it out to leading order. Call the generator of g T_1 and the generator in the middle T_2, and put a parameter ε_1 on the first and ε_2 on the second. So I take e^{iε_1 T_1} e^{iε_2 T_2} e^{-iε_1 T_1} and expand, keeping the leading terms in ε_1 and also the leading order in ε_2. So what do I get? From the left factor I get 1 + iε_1 T_1, from the middle I get 1 + iε_2 T_2, and from the right I get 1 - iε_1 T_1. (Let me note in passing: if the ε's were complex, we'd break them up into two different generators with real parameters; so for me these ε's are always going to be real.) Now expand the product out. The terms that are linear in ε_1 with no factor of ε_2 simply cancel: if I didn't have the middle factor at all, this would be a symmetry transformation times its inverse, which just gives the identity. The term that is linear in ε_2 with no factor of ε_1 is simply the object we started with. I'm interested in the change.
So the leading order in the change comes with one factor of ε_1 and one factor of ε_2: if we had no factor of ε_2 there would be no change at all, and with no factor of ε_1 the left and right factors cancel. So what is it? Collecting the cross terms, one term is (iε_1 T_1)(iε_2 T_2) and the other is (iε_2 T_2)(-iε_1 T_1); together they give, up to the overall factors, the commutator of T_1 with T_2. In other words, δT_2 = iε_1 [T_1, T_2]. Now notice the first important point: the conjugation action of the group on itself infinitesimally generates a linear action on the space of generators. You see, we started with some generator, we acted on it with a symmetry transformation, and we got something linear in what we started with. That would not have happened if we had just looked at the simple left action, e^{iε_1 T_1} e^{iε_2 T_2}, because there the leading term in the exponent would have been ε_1 T_1 + ε_2 T_2: there would be a part of δT which is just T_1, without any factor of T_2. That is not a linear action of the group; it could not come from an infinitesimal version of a matrix transformation, φ̃ = M φ. Do you understand? So the simple action of the group from the left, or from the right, does not at the infinitesimal level generate a representation of the group in the sense that we defined it. But this combined action of the group, from the left and from the right, does define at the infinitesimal level a representation of the group, and the representation is such that T_1 acting on T_2 is given by the commutator. So let's look at this representation in a little more detail. Let me relabel and write it as [T_b, T_a], switching the order, and so the sign. The logical role of these two objects in the expression I am writing is different. The T_a is the basis vector in the vector space that hosts the representation.
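[Editor's aside: the statement g T g^{-1} = T + iε[T_1, T] + O(ε^2) can be confirmed directly. A sketch in numpy with two random Hermitian "generators" of my own choosing.]

```python
import numpy as np

def mexp(X, terms=30):
    # matrix exponential by Taylor series (adequate for small X)
    out = np.eye(X.shape[0], dtype=complex)
    term = out.copy()
    for n in range(1, terms):
        term = term @ X / n
        out = out + term
    return out

rng = np.random.default_rng(2)
H1, H2 = rng.standard_normal((2, 3, 3))
T1, T2 = H1 + H1.T, H2 + H2.T      # two Hermitian "generators"
eps = 1e-6
g = mexp(1j * eps * T1)
g_inv = mexp(-1j * eps * T1)

# conjugation of a generator: g T2 g^{-1} = T2 + i eps [T1, T2] + O(eps^2)
delta = g @ T2 @ g_inv - T2
expected = 1j * eps * (T1 @ T2 - T2 @ T1)
assert np.linalg.norm(delta - expected) < 1e-9
```

The change is linear in T2, which is the property that makes conjugation, unlike left multiplication alone, a candidate for a representation.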
And the T_b is the generator that acts. Is this clear? So what do we get? By our definition, T_b acting on T_a gives [T_b, T_a] = i f_{ba}{}^C T_C. This is the statement that we've got a linear action, because the action of the generator on a basis vector gives us a linear combination of basis vectors. In general, when we've got some basis vectors φ_m and a linear representation, we write δφ_n = i ε (t_a)_n{}^m φ_m, and we call (t_a)_n{}^m the representation matrices of T_a in the representation R. In a particular representation R there are labels on this matrix, and we will read this matrix with the lower index as the first index and the upper index as the second. That's the convention. A representation assigns to every basis element of the algebra a matrix, read off from how it acts on the basis vectors of the vector space. So the question I'm asking now is: what are the representation matrices for the symmetry algebra acting on the space of generators itself? How does T_b act on the basis vectors T_a? We've essentially got the answer already. This particular representation has a name: it's called the adjoint representation, and I'll write its matrices as T^adj. By the way, the reference, though I never actually used the reference in today's lecture, is Chapter 16 of Weinberg's Quantum Theory of Fields; I want to make sure I'm using the conventions that reference uses. Now, note that these f's are obviously antisymmetric in the two lower indices, because the commutator is antisymmetric. So I can flip the positions of the two lower indices just by changing the sign: f_{ba}{}^c = -f_{ab}{}^c.
So T_b acting on T_a gives [T_b, T_a] = i f_{ba}{}^c T_c = -i f_{ab}{}^c T_c. The only difficult thing about this whole business is keeping track of these conventions. [Student:] Is this exact, or only to first order? [Lecturer:] It's true to first order in the small parameter; we're working infinitesimally. Okay, good. Is this clear? Okay, look. We took a group element, we put it in a similarity transformation by another element, and we asked for the change, working infinitesimally. We found that the change was proportional to the commutator of T_b with T_a. Because of that, we have a particular representation of the group, in which the generator T_b acts on the representation vector T_a. You see, what's confusing about this is that both the role of the symmetry generator and the role of the representation vector are being played by the same objects, or equivalent objects. But in our minds we have to distinguish them: this T_b is the generator; this T_a is the basis vector. Is this clear? [Student question about the factors of i.] You're right, there is an issue of i's here. Our convention is that a symmetry transformation is e^{iεt}, so the infinitesimal change of a field comes with an explicit i times the representation matrix; to be consistent with that notation, the matrix itself is defined with the i stripped off. Let me do this really carefully.
A symmetry transformation e^{iεt} generates the change δφ = iεtφ on a field. Okay? So now let me actually find the change in the generator carefully. I conjugate: e^{iεT_b} (iT_a) e^{-iεT_b}. To first order in ε, what I get is iT_a plus the cross terms, which assemble into iT_a + iε · i[T_b, T_a]. This whole thing we want to write as i(T_a + δT_a). So that tells us that δT_a = iε[T_b, T_a]: the commutator had better be a linear combination of T's, with an additional i floating around. Okay, correct. And now let's get the signs right. [Student correction.] Absolutely. Yeah, absolutely. Thank you; that's a beautiful catch. So: δT_a = iε[T_b, T_a] = iε · i f_{ba}{}^c T_c = ε f_{ab}{}^c T_c. And let me compare with the general formula, δφ_n = iε (t_b)_n{}^m φ_m. Matching the two, we pick up an additional minus i: (T^adj_b)_a{}^c = -i f_{ab}{}^c. Okay. Thank you; sorry about the confusion. So the adjoint action of T_b is defined by this matrix built out of the structure constants. Now, to understand what this means: we've got the space of generators, and one particular basis of it. So suppose I've got basis vectors T_1, T_2, T_3, and so on; T_a is a basis vector of the space of generators. Now, how do we write the action of a symmetry?
We write that, acting on a particular basis vector, we get δφ_n = iε (t)_n{}^m φ_m, and what we want to do is identify this matrix. So what we got here was that δT_a came out with the matrix -i f_{ab}{}^c. This T_c here is playing the role of the φ_m, so the matrix is just the structure constants, (T^adj_b)_a{}^c = -i f_{ab}{}^c. So we've got this object, and we have this abstract way of thinking that seems to guarantee that this action gives us a representation. Of course, it's not satisfying until we check it by hand. Does this action actually give us a representation of the algebra? What we have to check is that the adjoint matrices T^adj indeed satisfy the commutation relations of the algebra, with the same structure constants f. Now, this is not automatic from the definition; it's an identity that should be true, and we can check it by just plugging in what the matrices are. What we get is some bilinear identity among the f's. And where can we get an identity bilinear in the f's from? Of course, all of you know: from the Jacobi identity. Take [T_a, [T_b, T_c]] and cyclically sum. The inner commutator gives an f times a T, and the outer commutator gives another f, so every term in the cyclic sum is a bilinear expression in the f's times a generator. I won't try to do it on the board, because it would take me fifteen minutes and I'd get the signs wrong, but I'll request you to check that the identity you need is precisely the Jacobi identity.
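[Editor's aside: the check the lecturer declines to do on the board is quick to do numerically for su(2). A sketch in numpy, using one standard components convention for the adjoint matrices, (t_a)_{bc} = -i f_{abc}, and verifying they satisfy the algebra with the same structure constants.]

```python
import numpy as np

# structure constants of su(2): f_{abc} = epsilon_{abc}
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

# adjoint representation matrices in the components convention
# (t_a)_{bc} = -i f_{abc}
t = -1j * f

def comm(A, B):
    return A @ B - B @ A

# check the algebra: [t_a, t_b] = i f_{abc} t_c
for a in range(3):
    for b in range(3):
        rhs = sum(1j * f[a, b, c] * t[c] for c in range(3))
        assert np.allclose(comm(t[a], t[b]), rhs)
```

The assertion succeeding for every pair (a, b) is the su(2) instance of the general statement that the Jacobi identity makes the adjoint matrices close into the same algebra.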
You get it from the same place. The fact that the group itself generates a representation of its own algebra is, on general grounds, guaranteed algebraically by the Jacobi identity. And the action of the group on itself, in the so-called adjoint representation, is generated by these adjoint matrices. This is how the group acts on its own generators. This all sounds very abstract, so let's look at one familiar example: SO(3). SO(3) is generated by the angular momenta, and the angular momenta are themselves vectors. So the action of rotations on the generators, which is the adjoint action of SO(3), is simply the spin-one, vector representation of SO(3). This is very abstract in general, for any group, but for a simple familiar group like SO(3), in some language you already know this fact. Okay. Excellent. So this is almost all the abstract Lie group theory I wanted to do. Oh, there's one more thing. There's one other construct that will be very important for us in what follows. This construct is that of an invariant tensor. So we will sometimes be interested in taking two generators and making a group invariant out of them: that is, contracting them with some object, such that the transformation of the whole combination under an infinitesimal generator is zero. Let's try to see if we can find such a group invariant for any representation. Let's study this object: the trace of T_a T_b in the representation R. So now I take this object and I act on it with an infinitesimal generator.
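[Editor's aside: the claim that the adjoint of SO(3) is the spin-one representation can be made quantitative. A sketch in numpy: build the adjoint matrices (t_a)_{bc} = -i ε_{abc} and check two spin-one signatures, the eigenvalues of t_3 and the quadratic Casimir j(j+1) = 2.]

```python
import numpy as np

# adjoint generators of so(3): (t_a)_{bc} = -i eps_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
t = -1j * eps

# spin-one signatures: t_3 has eigenvalues {-1, 0, +1},
# and the Casimir sum_a t_a^2 equals j(j+1) = 2 times the identity
evals = np.linalg.eigvalsh(t[2])
assert np.allclose(evals, [-1.0, 0.0, 1.0])

casimir = sum(t[a] @ t[a] for a in range(3))
assert np.allclose(casimir, 2 * np.eye(3))
```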
I know how generators act on things. So what do I get? The change of T_a T_c under the action of T_b is [T_b, T_a] T_c + T_a [T_b, T_c], times the usual iε: the generator acts on each factor in turn, as we would expect. Now let's look at the trace of this object, and ask when it vanishes. [Some back-and-forth with students about where the indices should sit.] I'll tell you what, let me do the following. We don't yet know what up and down means here, so let me just contract with an arbitrary metric: consider tr(T_a T_c) g^{ac}, and work out its variation. So what do I get? Here I get ([T_b, T_a] T_c + T_a [T_b, T_c]) g^{ac}. The first object, up to overall i's and so on, is f_{ba}{}^d T_d T_c g^{ac}, and the second is f_{bc}{}^d T_a T_d g^{ac}. Okay, good. Now suppose the following were true: suppose there existed a metric such that, if I used that metric to raise all the indices, or to lower all the indices, then f was antisymmetric not just in the two bottom indices, but in all three indices. Suppose that were true. Then the trace of this variation would be zero.
Because under the trace, once we raise the indices with g, the two terms combine, by the assumed complete antisymmetry of the f's, into the trace of a commutator, TT - TT, which vanishes. So here is the statement: suppose there exists a metric g such that f, when raised or lowered by g, becomes completely antisymmetric. Then tr(T_a T_b) g^{ab} is invariant under the group action. Now, here is a result that I won't try to prove, but will just cite from the theory of groups. You know, all groups in the world can be divided into compact and non-compact groups. Compact groups are groups such that the group manifold itself is a compact object; it doesn't run off to infinity. An example of this is SU(2). The space of rotations is, as you know, a three-sphere with antipodal points identified; SU(2), its double cover, is exactly a three-sphere. On the other hand, there are non-compact groups, where the group manifold itself is non-compact. An example of this is the Lorentz group: the set of boosts increases without bound. That group is SO(3,1) (I first said SO(2,1), sorry). And if you tried to draw its group manifold as some sort of space, it would look more like a hyperboloid than a sphere. The result I'm going to quote for you is the following: for every compact group, it is always possible to find such a metric.
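[Editor's aside: for su(2) the metric in question can be exhibited explicitly. A sketch in numpy, assuming the fundamental generators T_a = σ_a/2: it computes the candidate metric g_{ab} = 2 tr(T_a T_b), finds it is already δ_{ab} in this basis, and checks that f with all indices lowered by it is completely antisymmetric.]

```python
import numpy as np

# fundamental su(2) generators T_a = sigma_a / 2
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)
T = s / 2

def comm(A, B):
    return A @ B - B @ A

# candidate invariant metric: g_ab = 2 tr(T_a T_b)
g = np.real(2 * np.einsum('aij,bji->ab', T, T))
assert np.allclose(g, np.eye(3))   # already delta_ab in this basis

# f with all indices lowered: f_abc = -2i tr([T_a, T_b] T_c)
f = np.real(np.array([[[-2j * np.trace(comm(T[a], T[b]) @ T[c])
                        for c in range(3)]
                       for b in range(3)]
                      for a in range(3)]))

# completely antisymmetric: odd under swapping any pair of indices
assert np.allclose(f, -f.transpose(1, 0, 2))
assert np.allclose(f, -f.transpose(0, 2, 1))
assert np.allclose(f, -f.transpose(2, 1, 0))
```

This is the compact-group situation in miniature: the trace metric is positive and brings f to a fully antisymmetric form.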
Such that, raising and lowering with this metric, f becomes completely antisymmetric. And when you do this — for such groups it is moreover always possible to find a basis for the Lie algebra such that the metric G_AB is just proportional to delta_AB. I'm not telling you why this should be true; I'm just quoting the result. This is a result from mathematics. There is a nice, simple proof of it in an appendix of Weinberg's book; I'll ask you to read it, and we won't go through the proof here. So don't worry about trying to understand why this is true — first make sure you understand what is true. The class of groups for which this statement holds in full is the compact Lie groups. In fact the more general statement — that it is always possible to find such a metric — is true not just for compact groups. The thing that is special about compact groups is that the metric can always be brought to the form delta_AB. For non-compact groups it is always possible to put it in a similar form, but where some of the entries of the delta come with minus signs — something like diag(1, 1, 1, −1). But for a compact group it is always possible to find a metric, and a basis, in which the metric becomes the identity. This is the claim. It's a claim I'm not going to establish in this class, but it will be very important for us when we start building our theories. That's it for my incompetent foray into the theory of Lie algebras. Let's move on to physics. Any questions or comments on this?
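The quoted result can be illustrated for SU(2). A sketch (my own illustration, under my own normalization choices): build the candidate metric g_ab = f_acd f_bcd out of the structure constants — the Killing form up to normalization — and observe that for this compact group it comes out as a positive multiple of the identity, so a trivial rescaling of basis brings it to delta_ab.

```python
# Sketch: the Killing-form-like metric g_ab = f_acd f_bcd for SU(2),
# whose structure constants are the totally antisymmetric epsilon symbol.
def eps(a, b, c):
    # Totally antisymmetric symbol: the SU(2) structure constants f_abc.
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

g = [[sum(eps(a, c, d) * eps(b, c, d) for c in range(3) for d in range(3))
      for b in range(3)] for a in range(3)]

print(g)  # -> [[2, 0, 0], [0, 2, 0], [0, 0, 2]]: positive multiple of identity
```

Rescaling the generators by 1/sqrt(2) would turn this into exactly delta_ab, which is the special feature of a compact group.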
So now that we've tried to understand these symmetry groups in some way, let's go back to the problem we started with. We've got this field phi and we want to build a theory of phi that is invariant under a symmetry group that is not just U(1). So let's imagine that the kinetic term of our theory took the form G_MN del_mu phi^M del^mu phi^N. Careful — this G_MN is not the metric on the algebra we discussed before; it is some metric on the space of fields, chosen so that this whole object is invariant under the global symmetry transformation of interest. For instance, if the symmetry group is SO(3) and these phi's are in the vector representation of SO(3), then this G_MN is just delta_MN. So there is some action that is invariant under the global symmetry group we are interested in, and now we want to promote it to an action that is invariant under the gauged version of that symmetry. So first, the transformation law for the global group: phi goes to U phi, where U is a finite symmetry transformation that does not depend on the point in space. That was the situation before. Now, suppose I take U and make it U(x). That was exactly the step we took in the theory of electromagnetism. Suppose I take this U and make it U(x): how do I preserve the invariance?
So the important question is what happens to del_mu of U(x) phi. This object is equal to U(x) del_mu phi plus (del_mu U(x)) phi. The first term is harmless: if U were independent of x, del_mu phi would simply go to U del_mu phi, and invariance would be preserved, because you can pull the U(x) outside; it doesn't matter that it's a function of x. The second term, however, is dangerous, and in general it breaks the invariance of the action under the symmetry group. What we're going to do is imitate what we did with electromagnetism: we generalize the derivative to a covariant derivative, D_mu = del_mu − i A_mu. So imagine that we take the action to involve not little del_mu but capital D_mu acting on phi. Obviously the gauge field should transform in a way that cancels this troublesome term in the transformation of the derivative, and it's fairly clear what the rule should be: suppose A_mu goes to U A_mu U^(−1) − i (del_mu U) U^(−1). Then let's see what D_mu gives acting on U(x) phi. We break it into pieces: the del_mu goes through onto phi, or it hits the U, and there are the gauge-field terms — a total of four terms. One term is when the del_mu goes over to the phi: that's just U del_mu phi. One term is −i (U A_mu U^(−1)) acting on U phi, which is simply −i U A_mu phi, because the U^(−1) hits the U. And then there are the last two terms: the (del_mu U) phi term from the derivative hitting the U, and the term −i times (−i (del_mu U) U^(−1)) times U phi from the transformed gauge field. You can see that they are designed to cancel against each other: in the last term the U^(−1) kills the U.
We've got minus i times minus i, so a minus sign, times (del_mu U) phi — which precisely cancels the plus (del_mu U) phi that arises when the derivative hits the U(x). So under this transformation rule for the gauge field, D_mu phi goes to U(x) D_mu phi. So if we had a gauge field along with our matter field phi, and the gauge field had this transformation law, then the kinetic term D_mu phi^M D^mu phi^N G_MN would be gauge invariant, provided the original action was invariant under the global transformations. Okay, that's nice, but it's not sufficient to construct a full interacting field theory based on this symmetry group, because a full interacting field theory needs a kinetic term not just for the matter fields but also — and most importantly — for the gauge fields. So what we need is some sort of kinetic term for the gauge fields that is also invariant under our transformation. Someone asks: shouldn't phi in general be complex? Yes, in general it is; that's a matter of notation. If you take phi and phi-star as two different sets of variables, then the generators act on phi-star with a minus sign compared to the generators acting on phi. This is implicit in the statement that the whole action was invariant under global transformations.
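The cancellation just described can be checked numerically. This is a sketch with my own arbitrary choices of U(x), A(x) and phi(x) — one spatial dimension, gauge group SU(2), and U generated by sigma_3 so that the matrix exponential is diagonal and easy to write by hand:

```python
# Sketch: verify D'_mu (U phi) = U D_mu phi for the transformation
# A -> U A U^-1 - i (dU/dx) U^-1, at a single point x, all derivatives
# taken analytically for the sample fields below.
import cmath

def mv(M, v):  # 2x2 matrix times 2-vector
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def mm(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = 0.7
lam, dlam = 0.5 * x ** 2, x           # gauge parameter lambda(x) and lambda'(x)

phi  = [cmath.cos(x), cmath.sin(x)]   # sample matter field at x
dphi = [-cmath.sin(x), cmath.cos(x)]  # its x-derivative
A    = [[x, 0.3], [0.3, -x]]          # sample hermitian traceless gauge field

# U(x) = exp(i lambda(x) sigma_3) is diagonal, so everything is explicit.
U    = [[cmath.exp(1j * lam), 0], [0, cmath.exp(-1j * lam)]]
Uinv = [[cmath.exp(-1j * lam), 0], [0, cmath.exp(1j * lam)]]
dU   = [[1j * dlam * cmath.exp(1j * lam), 0],
        [0, -1j * dlam * cmath.exp(-1j * lam)]]  # dU/dx

# Transformed fields: phi -> U phi, A -> U A U^-1 - i (dU/dx) U^-1.
phi_t  = mv(U, phi)
dphi_t = [a + b for a, b in zip(mv(dU, phi), mv(U, dphi))]  # d(U phi)/dx
UAUinv, dUUinv = mm(mm(U, A), Uinv), mm(dU, Uinv)
A_t = [[UAUinv[i][j] - 1j * dUUinv[i][j] for j in range(2)]
       for i in range(2)]

# Covariance check: (d/dx - i A') (U phi) equals U (d/dx - i A) phi.
lhs = [dphi_t[i] - 1j * mv(A_t, phi_t)[i] for i in range(2)]
rhs = mv(U, [dphi[i] - 1j * mv(A, phi)[i] for i in range(2)])
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
print("D_mu phi transforms covariantly")
```

The agreement is exact up to rounding, since the cancellation of the (dU/dx) phi terms is an algebraic identity, not an approximation.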
Okay, let's move on. So now what we want to do is see how far we can imitate what the field strength was in electromagnetism. What one might originally think of trying is just defining an F_mu_nu equal to del_mu A_nu minus del_nu A_mu. By the way, notice that this A here, under a global symmetry transformation — that is, if we specialize the transformation rule to U's that are not functions of x — transforms in the adjoint representation of the gauge group: A goes to U A U^(−1), with no inhomogeneous term. So A lives in the Lie algebra of the gauge group, and under global transformations it transforms in the adjoint representation. Is this clear? In particular, these A_mu's can be written as A_mu = A_mu^a T_a, each component field multiplying a generator. Now, Mangesh's question is: how does D_mu itself transform? Exactly as Mangesh said: the transformation law here is that D_mu transforms as U D_mu U^(−1) — acting on U phi, it gives U times D_mu phi. So this covariant derivative has been devised so that it transforms homogeneously, by conjugation. And the field strength should play the role that the F played in the abelian covariant derivative story.
So suppose we try now to define a field strength that was just del_mu A_nu minus del_nu A_mu, and check whether it has some nice transformation property. The answer will be that it does not. We would like a homogeneous transformation property, F going to U F U^(−1), but this object doesn't have one. Let's check it. Under the gauge transformation, this goes to del_mu of (U A_nu U^(−1) − i (del_nu U) U^(−1)) minus the same thing with mu and nu exchanged. One good thing happens: the double-derivative pieces cancel by antisymmetry. And the term where del_mu passes through and hits A_nu gives U (del_mu A_nu − del_nu A_mu) U^(−1), which is the homogeneous piece we want. But there are the other actions: the terms where the derivative hits the U and the U^(−1). You can try your best to fool around with them, but you get nothing particularly nice. This is the end of trying to construct a field strength by the simple-minded generalization of the U(1) construction: it doesn't work. However, we don't need to guess how to generalize it to something that works, because we've implicitly already found the answer — from Mangesh's observation that the operator D_mu, acting on any object, already transforms nicely, by conjugation. Since D_mu transforms by conjugation, the correct way to generalize the U(1) construction goes as follows. Let's go back to the U(1) case. Take D_mu = del_mu − i A_mu acting on a field phi, and consider the commutator: act with (del_mu − i A_mu)(del_nu − i A_nu) minus the same thing the other way around, (del_nu − i A_nu)(del_mu − i A_mu), on phi. Let's just work out the answer. There are terms with two derivatives acting on phi — del_mu del_nu phi — which cancel by symmetry.
And there are the A_mu A_nu phi terms, which also cancel, because in the U(1) case these are just numbers and the order doesn't matter. So of the four terms from each product, two kinds cancel immediately. What about the terms with one derivative acting on phi? Look at them: from the first product we get −i A_mu del_nu phi, and also −i A_nu del_mu phi from the derivative passing through; the commutator subtracts exactly the same combination with mu and nu exchanged, so these cancel too. So: any term with both derivatives acting on phi cancels; any term with the two A's and no derivatives cancels; any term with a single derivative acting on phi cancels. What's left? What's left are the terms with one derivative and one A, in which the derivative acts on the A. Those are the only terms remaining: −i (del_mu A_nu − del_nu A_mu) phi. And you can see that this is proportional to F_mu_nu acting on phi. So another way of thinking of the field strength in the U(1) theory is as the commutator of two covariant derivatives acting on a field.
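This cancellation pattern can be verified numerically in the U(1) case. A sketch with arbitrary smooth sample fields (my own choices), implementing the covariant derivative with central finite differences so that no term is cancelled by hand:

```python
# Sketch: check [D_x, D_y] phi = -i F_xy phi for U(1), with
# D = d - i A built from central finite differences.
import cmath

h = 1e-4  # finite-difference step

# Arbitrary smooth sample fields, chosen only for this check:
Ax = lambda x, y: y * y
Ay = lambda x, y: x * y
phi = lambda x, y: cmath.exp(1j * (x + 2 * y))

def Dx(f):
    # Covariant derivative D_x = d/dx - i A_x (derivative by central difference).
    return lambda x, y: ((f(x + h, y) - f(x - h, y)) / (2 * h)
                         - 1j * Ax(x, y) * f(x, y))

def Dy(f):
    return lambda x, y: ((f(x, y + h) - f(x, y - h)) / (2 * h)
                         - 1j * Ay(x, y) * f(x, y))

x0, y0 = 0.3, 0.8
commutator = Dx(Dy(phi))(x0, y0) - Dy(Dx(phi))(x0, y0)

# For this choice F_xy = d_x A_y - d_y A_x = y - 2y = -y.
expected = -1j * (-y0) * phi(x0, y0)
assert abs(commutator - expected) < 1e-6
print("[D_x, D_y] phi = -i F_xy phi")
```

The residual difference is just the finite-difference truncation error; all the derivative-of-phi terms cancel between the two orderings, exactly as in the algebra above.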
It's a wonderful fact that the commutator of two covariant derivatives acting on a field is simply proportional to the field strength acting on that field. So, since we have a nice covariant derivative in the non-abelian theory, that is how we're going to generalize the field strength. What we're going to do is the following. Given this covariant derivative D_mu = del_mu − i A_mu, we define the field strength by the following identity: the commutator of D_mu and D_nu, acting on some field phi, is by definition equal to −i F_mu_nu acting on phi. For this definition to make sense, it must first be that all the terms with both derivatives acting on phi, or with one derivative acting on phi, vanish on the left-hand side. But the check of that is exactly the same algebraic check that we did for the U(1) case, and it works here too. So, firstly, this definition makes sense. Secondly, it is guaranteed to give us an object that transforms in the correct way, because we know that each of these D's transforms under gauge transformations like U D U^(−1). So the product of two of them transforms like U D U^(−1) U D U^(−1) — the inner U^(−1) U cancels — and the whole commutator transforms like U [D, D] U^(−1). So this definition is guaranteed to give us an F_mu_nu that, under gauge transformations, goes to U F_mu_nu U^(−1). This is the right way to generalize the notion of the field strength to the non-abelian case. Now all that remains is to work out algebraically what we get. At first you might think we'll run into a contradiction, because all we'll get is what we got in the U(1) case — but that's not true.
It is true that we get the terms we got in the U(1) case — del_mu A_nu minus del_nu A_mu — but now there is one more term: the A_mu A_nu terms no longer cancel, because the A's are matrices, and what survives is their commutator. So F_mu_nu = del_mu A_nu − del_nu A_mu − i [A_mu, A_nu]. Let's look at the logic again. What I wanted was a notion of the field strength in the abelian theory — a definition that would nicely generalize to the non-abelian one. One definition was del_mu A_nu minus del_nu A_mu. That was a nice notion, but it didn't nicely generalize. So we looked for a more sophisticated notion of the field strength in the abelian theory: we found that in the abelian theory it was given by the commutator of covariant derivatives. And then we said: this is a notion that will nicely generalize, because the covariant derivative in the non-abelian theory has by construction been built to transform nicely under gauge transformations. So generalizing this definition is guaranteed to give us a field strength that transforms homogeneously — in fact, in the adjoint representation of the gauge group. That's the definition of the field strength; there's the object. And now, to complete our construction of a real theory based on this non-abelian gauge group, what we do is very simple. We simply take F_mu_nu^a F^mu_nu^b G_ab, where G is this invariant metric that we found on the space of generators, and put it into the action with the usual normalization — minus one quarter, with the coupling pulled out if you like. This now gives us an action that is invariant under the gauge transformation and has a kinetic term for the gauge fields. A question: these indices a and b — strictly adjoint indices? Yes.
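The new commutator term can be made concrete with a small matrix check (my own arbitrary sample components, SU(2)): taking the gauge fields constant in space so the derivative pieces drop out, the matrix definition reduces to F = −i [A_mu, A_nu], and its components should be f^abc A_mu^b A_nu^c.

```python
# Sketch: for constant SU(2) gauge fields, compare the matrix form
# F = -i [A_mu, A_nu] with the component form F^a = f^{abc} A_mu^b A_nu^c.
def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def from_components(c, T):
    # Build the Lie-algebra matrix c^a T_a from components c.
    return [[sum(c[a] * T[a][i][j] for a in range(3)) for j in range(2)]
            for i in range(2)]

def eps(a, b, c):
    # SU(2) structure constants f_abc = epsilon_abc.
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

T = [[[0, 0.5], [0.5, 0]],      # sigma_1 / 2
     [[0, -0.5j], [0.5j, 0]],   # sigma_2 / 2
     [[0.5, 0], [0, -0.5]]]     # sigma_3 / 2

Amu_c = [0.4, -1.1, 0.7]   # arbitrary sample components A_mu^a
Anu_c = [0.9, 0.2, -0.5]   # arbitrary sample components A_nu^a
Amu, Anu = from_components(Amu_c, T), from_components(Anu_c, T)

# Matrix definition with constant fields: the derivative terms vanish.
F_mat = [[-1j * (mm(Amu, Anu)[i][j] - mm(Anu, Amu)[i][j])
          for j in range(2)] for i in range(2)]

# Component form: F^a = f^{abc} A_mu^b A_nu^c.
F_c = [sum(eps(a, b, c) * Amu_c[b] * Anu_c[c]
           for b in range(3) for c in range(3)) for a in range(3)]
F_reb = from_components(F_c, T)

assert all(abs(F_mat[i][j] - F_reb[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("matrix and component forms of F agree")
```

In the abelian case this commutator piece would be identically zero; here it is the genuinely new, nonlinear part of the field strength.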
These are adjoint indices, because we know that the gauge fields — and in particular the field strength — transform in the adjoint representation. So any field strength F can be written as F_mu_nu^a times T_a. A student asks: are those T_a's written in the adjoint representation? Actually, it doesn't matter — this is not specific to any particular representation. The object F_mu_nu can be written as F_mu_nu^a T_a where the T_a's are generators of the group in any representation. And the action of gauge transformations on F is generated by commutation: if you want to know how the infinitesimal generator T_b acts, it acts on F by the commutator of T_b with F. I don't need to say what representation this T_a is in. Let me put it this way: any object that is in the adjoint representation is, by definition, of the form alpha^a T_a, with the group acting by commutation. That statement is what puts the space of alphas in the adjoint representation. I don't need to say anything more about what representation those T_a's happen to be in. Written this way, the a's are just indices that tell you about the transformation properties under the group. These could be generators of the group in any representation: the fundamental representation, the adjoint representation, any representation. You see, these F^a's are defined to be the coefficients of T_a in the expansion F = F^a T_a. I don't need to specify the representation.
Because the action on F is by commutation with T_b — in whatever representation you choose to write F = F^a T_a. To say it another way, this is now abstract group theory: F equals F^a T_a, where the T_a are the generators of the group, and the action of the group on this object is in the adjoint representation, by conjugation — it acts like U F U^(−1). I don't need to say any more; the T_a could be completely abstract. But, someone objects, F_mu_nu has been built out of the A_mu's, which themselves act on a particular matter field through T_a's in that matter field's representation. No — wait. These A_mu's, acting on any particular matter field, had the expression A_mu^a T_a with the T_a's in the representation of that matter field. But you see, we could have a theory with three different matter fields, transforming in three different representations. If we were constrained to build the kinetic term for the gauge field in the representation in which the matter appears, we would be in trouble — we would not be able to write such a term. This is a very important point: the gauge field itself transforms in the adjoint representation, independent of what the matter is doing. The gauge field can always be written as A^a T_a, its transformation is by commutation with T_b, and that transformation property is independent of which representation the T_a's are taken in. So the gauge fields transform in the adjoint representation, and can be written as A^a T_a where that T_a is in absolutely any representation. Is this clear? So where are we? We have now got a non-abelian gauge theory that we can use as a basis for further analysis. And by the way, now that we've got this theory, we can look at it even in the absence of matter fields.
We used matter fields as a motivation to build up the structure, but the pure gauge theory stands on its own. So let's write the path integral with weight exponential of i times the action, the action being minus F_mu_nu^a F^mu_nu^a over 4 g squared — we usually pull the coupling out in front this way. Let's start with this path integral and once again try for a Hilbert-space interpretation of it, as we did for the abelian theory. We're going to do basically — in fact exactly — the same things, but there are two interesting twists relative to the abelian case. The first twist goes as follows. There's this metric G_AB. Remember the claim that, in general, it is possible to make G_AB a diagonal matrix of 1's and minus 1's, and that for compact gauge groups it is always possible to make it with just 1's. Now, why does that matter? If it's all 1's, then every kinetic term has the correct sign: every A-dot A-dot term appears with a positive coefficient. If this G_AB had some negative entries, then for those components we would get the wrong sign in front of the kinetic term. And that is exactly the kind of thing we discussed in the last class: it's not good — it leads to energies unbounded from below, and to problems with unitarity. So this whole trick of generalizing to non-abelian gauge theories only works when we are able to find a metric G_AB that is positive definite, and as we discussed before, that only works for compact groups. Though for formal purposes it is sometimes useful to study gauge theories based on non-compact groups, at first approximation — unless you're doing something very specific, and certainly for applications — we should restrict our attention to gauge theories based on compact groups. That's the first thing.
The second thing is simpler. So let me now imagine that I'm working with a compact group, in a basis such that G_AB is proportional to delta_AB, and F squared means F_mu_nu^a F^mu_nu^a, everything contracted with deltas. What next? The next thing is the Hilbert-space interpretation of this model. We want to imitate our discussion of the abelian theory. In that imitation, what do we find? We find that the Hilbert-space interpretation is given by the following hybrid object: a path integral over A_0^a of e to the minus i integral H dt, acting in a Hilbert space made out of square-integrable functionals of the A_i^a — everything nice. But what is H, more precisely? First the Lagrangian: it is F_0i^a F_0i^a — everything summed over implicitly — divided by 2, minus F_ij^a F_ij^a over 4. No change here from the abelian case; we just write it out in components. Then, what is the canonical momentum conjugate to A_i^a? The canonical momentum is simply F_0i^a, as before. So the Hamiltonian, as before, is A-dot_i^a F_0i^a minus the Lagrangian, and as before what we're going to do is write this in terms of the canonical momenta. Now, what was F_0i? In matrix form it was del_0 A_i minus del_i A_0 minus i times the commutator of A_0 with A_i — that was our definition of the field strength. To write it in components, we substitute A_mu = A_mu^a
T_a and use the commutation relations, so that in components F_0i^a = del_0 A_i^a − del_i A_0^a + f^abc A_0^b A_i^c. It is then convenient to write A-dot_i^a as F_0i^a plus del_i A_0^a minus f^abc A_0^b A_i^c — that is, as this object plus the rest. Once we do that, the A-dot F_0i term combines with the F_0i F_0i term, and our full Hamiltonian becomes F_0i^a F_0i^a over 2, plus F_ij^a F_ij^a over 4, plus the leftover piece: (del_i A_0^a − f^abc A_0^b A_i^c) times F_0i^a. Now let's use the fact that the canonical momentum F_0i^a, acting on wave functionals, is the generator of shifts in A_i^a. So all this leftover term does, once we integrate over A_0^a, is shift A_i^a: it takes psi of A_i^a and shifts the argument by the quantity del_i A_0^a − f^abc A_0^b A_i^c, and then we integrate over A_0. It's just like a change of variables in a Gaussian integral: this is our shift, and the integral over A_0 acts as a projector. We just have to identify what this projector is.
Now, this quantity you can check is simply (D_i A_0)^a, with the covariant derivative acting in the adjoint representation — and that is precisely the action of an infinitesimal gauge transformation on the A_i fields. How do we know that? We know how A_i transforms under finite gauge transformations; all you have to do is take that finite action and specialize it to the infinitesimal case. So once again the Hilbert-space interpretation of the path integral is very simple: the integral over A_0 is projecting the wave functional onto the gauge-invariant sector — where gauge invariance is now defined by the correct non-abelian gauge transformations. Any questions or comments on this? Okay. Now, in the last few minutes of today's class, let me say where we are. What we've done is very briefly survey the three basic kinds of path integrals we'll encounter in this class: path integrals for scalar fields, path integrals for gauge fields — abelian and non-abelian — and path integrals for fermions. There will be a problem set for you by next Friday, to give you some basic manipulations with non-abelian gauge theories, together with some Grassmann techniques. There is also an issue concerning these path integrals for gauge theories that came up last class — if you remember, some of you were uncomfortable about the gauge invariance — and to deal with it I want to discuss the Faddeev-Popov procedure. So we've spelled out the basic grammar of what's going to happen in the class. We need to
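The step "take the finite action and specialize it to the infinitesimal case" can be sketched numerically. This is my own illustration, checking only the homogeneous (adjoint) piece — I take the gauge parameter constant in space, so the del_i term drops and the transformation is pure conjugation:

```python
# Sketch: for U = exp(i lambda), the finite transformation U A U^-1
# agrees with the infinitesimal form A + i [lambda, A] up to O(lambda^2).
import cmath

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e = 1e-4                                     # small gauge parameter
lam  = [[e, 0], [0, -e]]                     # lambda = e * sigma_3
U    = [[cmath.exp(1j * e), 0], [0, cmath.exp(-1j * e)]]
Uinv = [[cmath.exp(-1j * e), 0], [0, cmath.exp(1j * e)]]

A = [[0.3, 0.5 - 0.2j], [0.5 + 0.2j, -0.3]]  # sample hermitian traceless A_i

finite = mm(mm(U, A), Uinv)                  # finite gauge action on A
c1, c2 = mm(lam, A), mm(A, lam)
infinitesimal = [[A[i][j] + 1j * (c1[i][j] - c2[i][j])
                  for j in range(2)] for i in range(2)]

# The two agree up to terms of order e^2.
diff = max(abs(finite[i][j] - infinitesimal[i][j])
           for i in range(2) for j in range(2))
assert diff < 10 * e * e
print("finite conjugation matches A + i[lambda, A] to first order")
```

Restoring the x-dependence of lambda adds the inhomogeneous del_i lambda piece, giving the full delta A_i = D_i lambda that appears in the projector.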
do a little more formal analysis of the gauge theories: I want to discuss the Faddeev-Popov method, do a little bit of calculation using path integrals in gauge theories, and also cover BRST quantization and the BRST Hamiltonian treatment of gauge theory. After we go through that, we start actually using this formalism — we start doing more physics. Oh yes — this Friday there is a holiday, so we may need extra classes. What is your schedule? What about Tuesday afternoon? Wednesday afternoon? Wednesday at 2 o'clock — I'll have to check the availability of this room. Also, could somebody take charge of making a mailing list for the class? Someone who sits in the theory student room — can you create a mailing list, so that it's easier to send out announcements if we have to change something? I'll confirm the availability of Wednesday afternoon and let you know.