We compare SU(2) and SU(3) and slowly take you to the abstract Lie algebra, the simple Lie algebra. What is a simple Lie algebra? It has no non-trivial invariant subalgebra. So I will take you from what you know and then we get on to SU(3), compare and contrast. Some of this we did in the last lecture, but today we will compare and fix the root and weight vectors.

The first thing is that we all write the raising and lowering operators in the SU(2) algebra; J± shifts the state |j, m⟩ by one unit of m. That is the operation of the raising and lowering operators; you all know this. Similarly, for SU(3) the lowest non-trivial dimension is 3 × 3: Hermitian traceless generators, with 8 independent real parameters, and out of these you can form 2 diagonal generators. In SU(2) you had 2 × 2 matrices with 1 diagonal generator; here you have 2 diagonal generators.

Someone came and asked me why this choice is unique. The way to see it is that in λ3, the upper 2 × 2 block is the SU(2) sitting inside SU(3): whatever you did for σz, the z-component Pauli matrix, sits inside SU(3) as well. The next diagonal generator you construct should act like the identity operator on that 2 × 2 block, but with a non-trivial third entry, so that the trace adds up to 0. The overall normalization is not important, except in the sense that if you want to associate these eigenvalues with values like charge or other quantum numbers, you fix the normalization later. So these are the 8 generators of SU(3), and 2 of them are diagonal. So any state belonging to an SU(3) representation will be a simultaneous eigenstate of λ3 and λ8, you all agree?

So, SU(2) versus SU(3): for SU(2) you have the fundamental j = 1/2, m = ±1/2, and the analog of this j I am going to call λ here and explain what that λ is; the analog of m I am going to call the μ vector, which is also a vector. We put this j = 1/2 in hindsight. The way we put it is: if J+ acts on m = +1/2, what will this be? 0. Then we call m = 1/2 the highest weight state, and this highest weight is what we denote by j. Whatever state no raising operator can take further up, that highest value is what we call j, and we put that j value in the label. You do not need the quadratic Casimir in the group theory language to fix the j value: j is the weight for which the raising operator J+ gives 0.

Similarly, here the μ vector is important; it has components μ1 and μ2. You have the diagonal operator λ3, which is what I called H1. So if H1 operates on |μ⟩, what will it give me? It will pull out the first component of this μ vector: H1|μ⟩ = μ1|μ⟩. Similarly H2; and what is H2? H2 = λ8/2, and H2|μ⟩ = μ2|μ⟩, where μ2 is the second component of the two-component vector. So there will be only two components for SU(3). In SU(2), m is a number with only one component, and the weight is just a single weight; the reason is that there is only one diagonal generator, which is why m has just one component. Here H1 and H2 are two diagonal generators, so your μ vector will be a two-component vector, and the number of diagonal generators is what we sometimes call the rank of the Lie algebra.
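Since we write H1 = λ3/2 and H2 = λ8/2 as explicit matrices anyway, here is a minimal numerical sketch of this statement (the numpy code and the variable names H1, H2 are my own illustration, not from the slides): each basis state of the defining representation is a simultaneous eigenstate of the two diagonal generators, and the pair of eigenvalues is its weight vector.

```python
import numpy as np

# Diagonal generators of SU(3): H1 = lambda_3 / 2, H2 = lambda_8 / 2.
H1 = np.diag([0.5, -0.5, 0.0])
H2 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

# The three basis states (1,0,0), (0,1,0), (0,0,1) of the defining rep.
for i, v in enumerate(np.eye(3), start=1):
    # v is an eigenvector of both H1 and H2; the eigenvalue pair is the weight.
    mu = (v @ H1 @ v, v @ H2 @ v)
    print(f"weight mu_{i} = ({mu[0]:+.4f}, {mu[1]:+.4f})")
# mu_1 = (+0.5000, +0.2887), mu_2 = (-0.5000, +0.2887), mu_3 = (+0.0000, -0.5774)
```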
So the rank is 2: the number of diagonal generators is what we call the rank, and for SU(3) it is 2. For SU(2) the rank is 1. Now tell me how you will extend this; you can do SU(4). You will have a μ vector, and H1, H2 and H3 as diagonal generators. Any commutator of Hi with Hj will be 0: here it is [H1, H2] = 0, and there any i and j from 1, 2 and 3 commute. So if you operate Hi, with i = 1, 2 or 3, on the μ vector, how many components does μ have? Three components here. So μ = (μ1, μ2, μ3), and Hi|μ⟩ = μi|μ⟩, clear?

I have still not said what this λ is, but it should be kind of clear to you, just like J+ annihilates the highest weight here. Look at the remaining generators: for SU(3) there are 8 generators, for SU(2) there are 3 generators. In SU(2), once the diagonal generator is taken out, the 2 other generators combine into Hermitian conjugates of each other, right? We write the remaining 2 off-diagonal generators as what? J± = J1 ± iJ2, up to some normalization; J+ and its Hermitian conjugate J−. The diagonal generator we call J3. This first column you all know. I am now making an extrapolation to the second column, which is the first non-trivial one, and then generalizing to arbitrary SU(n), which will have n − 1 diagonal generators, and the weight becomes a weight vector with how many components? n − 1 components, is that clear? So out of these, this is what you used to show that J+ on the highest weight gives 0.

For SU(3) you have 2 diagonal generators, and the remaining off-diagonal ones are 6 of them, right? So let me call λ1 ± iλ2 the pair E±α1 (I will explain what this α1 vector means), then you will have E±α2, which is λ4 ± iλ5, and there is one more, E±α3, which will be λ6 ± iλ7. What this α1 is: here, technically, I should label J± by ±1, by which I mean that it shifts m to m ± 1. Similarly, this α1 vector will shift the weight vector by α1; that is the meaning, is that clear? We will do this elaborately.

So the remaining 6 off-diagonal generators can be grouped into Hermitian conjugate pairs in this fashion, where these αi are again 2-component vectors, because the μ vector is 2-component; the rank of the group is 2, so you will have 2 components. And the job of E±αi is that if you operate it on |μ⟩, it has to give you — I should not say "equal to", there will be some proportionality constants and all — it will just raise the weight vector by αi units or lower it by αi units, depending on whether you take λ1 + iλ2 or λ1 − iλ2. Now tell me, what will be the analog of j, which I call λ here? What is the requirement? E+αi|μ⟩ = 0 for all i = 1, 2 and 3.
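To make concrete the statement that E±αi shifts the weight by αi, here is a small sketch (again my own numpy illustration; the grouping of the λ's into pairs follows the slides, which as noted later may interchange some raising and lowering labels): the commutator [Hi, Eα] = αi Eα lets you read off the root vector attached to each Hermitian conjugate pair.

```python
import numpy as np

# Off-diagonal Gell-Mann matrices, to be combined into ladder operators.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])

H = [np.diag([0.5, -0.5, 0.0]).astype(complex),
     (np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))).astype(complex)]

def root_of(E):
    """Read off alpha from [H_i, E] = alpha_i * E (valid for these E)."""
    idx = np.unravel_index(np.abs(E).argmax(), E.shape)  # the nonzero entry
    return tuple(((Hi @ E - E @ Hi)[idx] / E[idx]).real for Hi in H)

for name, E in [("l1 + i l2", l1 + 1j * l2),
                ("l4 + i l5", l4 + 1j * l5),
                ("l6 + i l7", l6 + 1j * l7)]:
    print(name, "-> root", root_of(E))
# roots: (1, 0), (1/2, sqrt(3)/2), (-1/2, sqrt(3)/2)
```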
So let me call that weight with a subscript h, for highest. Just like here, where m = 1/2: in SU(2) there is only one raising operator that annihilates it, but here there are 3 raising operators, λ1 + iλ2, λ4 + iλ5 and λ6 + iλ7. When all of them hit some weight vector and give you 0, then you call this weight the highest weight vector, and that is what you denote by the λ vector, just like m = 1/2 is what you denoted by j. If this happens, you call that weight vector the λ vector, and you label the state by λ to keep track that it is the highest weight vector; the lower weight vectors are obtained by, you know, the lowering operators. The same thing holds there; I am not going to do SU(4), but I guess from these two you understand what I am trying to tell you.

This algebra of the diagonal generators is called the Cartan subalgebra; these things I have already explained in the slides. This Cartan subalgebra here has rank 2, because there are 2 generators; for SU(4) it will be 3, because there are 3 generators, with i and j running over 1, 2 and 3. Is this clear? Now the magnetic quantum number, which you are all very familiar with, gets promoted to a weight vector in the Lie algebra language, which will be 2-component for SU(3).

Specifically, for the defining representation, the lowest non-trivial representation of SU(3), you can write these 3 basis states; λ I will explain now, and μ1 turns out explicitly as follows. If you apply these matrices on the basis (1, 0, 0), H1 will give you 1/2 and H2 will give you √3/6; this we discussed last time. So that will be the weight vector corresponding to (1, 0, 0). Similarly for the basis (0, 1, 0), on which the matrices operate: the corresponding magnetic quantum number, now promoted to a weight vector, will be the two eigenvalues of the diagonal generators. So this is the way you construct the weight vectors, the 3 weight vectors corresponding to the lowest non-trivial dimension of SU(3), which is called the defining representation.

You can plot the weights: these are the 3 weight vectors corresponding to the 3 basis states you are looking at. This is the plot in the H1–H2 plane of the corresponding values, the weight vectors associated with the basis (1, 0, 0), with the basis (0, 1, 0), and the one for (0, 0, 1). The subscripts 1, 2 and 3 are immaterial; they are just to keep track that there are 3 different 2-component vectors, and the normalizations uniformly follow what I did for SU(2).

So you can see that there are 3 raising and 3 lowering operators. One more slight subtlety: the labels of the lowering and raising operators are slightly interchanged here compared to what I wrote, but that does not matter; this is the definition of the 3 operators, one will raise by α3, here it will decrease by α2. So E−α3 on |μ1⟩ will reduce μ1 by α3. Come back to the board: you know that if you do J− on m = +1/2, you get something proportional to m = −1/2. There the weight diagram is only along the H1 axis: you start with the state which corresponds to m = +1/2, and the ladder operation takes you to m = −1/2.
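Following the same logic, here is a quick check (my own sketch, continuing the conventions of the previous snippet) that exactly one of the three basis states is annihilated by all three raising operators, which identifies the highest weight state of the defining representation:

```python
import numpy as np

# The three raising operators in the defining rep (each has one nonzero entry).
Eplus = [np.zeros((3, 3), dtype=complex) for _ in range(3)]
Eplus[0][0, 1] = 2.0  # lambda_1 + i lambda_2
Eplus[1][0, 2] = 2.0  # lambda_4 + i lambda_5
Eplus[2][1, 2] = 2.0  # lambda_6 + i lambda_7

for i, v in enumerate(np.eye(3, dtype=complex), start=1):
    killed = all(np.allclose(E @ v, 0.0) for E in Eplus)
    print(f"basis state {i} annihilated by all raising operators: {killed}")
# Only basis state 1 passes the test, so mu_1 is the highest weight.
```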
These are the 2 states which define the irrep of SU(2). It is a 2-dimensional vector space, and the 2 states, whose weight values are m = +1/2 and m = −1/2, can be connected by the ladder operators: you apply a lowering operator on this one to get to that one, and a raising operator on that one to get back. Same thing here: you have the H1–H2 plane, and there are 3 states, 1, 2 and 3.

The raising and lowering operators should be such that if you take the weight vector μ1 here, and suppose I call this direction the vector α3, then it has to give me μ2. Just like here, where the operator decreases m by 1 unit, this one has to decrease the vector μ1 by α3 vector units. So you have to make sure that μ2 = μ1 − α3, is that clear? That is the way the shift will happen, but it must give me one of the 3 weight states which I found by using the Gell-Mann matrices. Because I know μ1 and I know μ2, I can determine α3: the difference between these two points, μ1 − μ2 or μ2 − μ1, will give me α3.

Whatever happened here in a one-dimensional weight diagram now becomes a two-dimensional weight diagram. Specifically, I am looking at the state and doing a lowering operation. When I do a lowering operation, I know that the weight vector has to shift by α3, but I also know the weight vector for the state: it has to be one of the states, because we have explicitly written down the defining representation acting on (1, 0, 0), (0, 1, 0), (0, 0, 1), and from there you can deduce what α3 is. α3 will be the difference between two weight vectors, and similarly for α1 and α2.

So this is what I am showing again on the slides: E−α3 on |μ1⟩ should be proportional to |μ1 − α3⟩, and it should be equal to |μ2⟩, where α3 = μ1 − μ2. This is what we call a root vector: the difference between the weight vectors of any two states is what we call a root vector. In this way you can find that α1 = μ1 − μ3. What is that direction? This one will correspond to α1, and there is also one more direction, which will correspond to α2.

In the SU(2) case you had only one direction, and the difference is only ±1. If I go from here it is +1, from there it is −1; so I should not even fix the sign, I should just say the root is ±1. Here α3 will be the difference μ1 − μ2, with plus or minus depending on the direction; α2 is for this direction, α3 is for that one, and they come naturally here by combining the generators.

So how many root vectors will you have? The root vectors will be as many as the off-diagonal generators. The number of off-diagonal generators is 2 in SU(2), and those 2 are Hermitian conjugates of each other: one raises by +1, the other lowers by 1, right? So for J+, +1 is one root, and you have the negative of it, which you call the negative root, −1 for J−. That is one positive root and one negative root for SU(2). For SU(3), how many roots are there? Six are there, but 3 are positive and 3 are negative.
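Numerically, with the three weights from before, the roots drop out as differences (my own sketch; the pairing α2 = μ2 − μ3 is my assumption for the one difference left implicit above):

```python
import numpy as np

# Weights of the defining representation, read off earlier.
mu1 = np.array([0.5, np.sqrt(3.0) / 6.0])
mu2 = np.array([-0.5, np.sqrt(3.0) / 6.0])
mu3 = np.array([0.0, -1.0 / np.sqrt(3.0)])

alpha3 = mu1 - mu2  # (1, 0)
alpha1 = mu1 - mu3  # (1/2, sqrt(3)/2)
alpha2 = mu2 - mu3  # (-1/2, sqrt(3)/2), assumed pairing
for name, a in [("alpha1", alpha1), ("alpha2", alpha2), ("alpha3", alpha3)]:
    print(name, "=", np.round(a, 4))
# These match the roots found from the commutators [H_i, E] = alpha_i * E.
```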
So how will you find this in general? Take the total number of generators minus the rank of the algebra; that should always be an even number, and you divide it by 2 to get the number of positive roots, the positive root vectors.
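For example, for SU(2): (3 − 1)/2 = 1 positive root. For SU(3): (8 − 2)/2 = 3 positive roots. And for SU(4), which has 4² − 1 = 15 generators and rank 3: (15 − 3)/2 = 6 positive roots.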